How to Hive
-----------

DISCLAIMER: This is a prototype version of Hive and is NOT production 
quality. It is provided mainly as a way of illustrating the capabilities 
of Hive and comes as-is. However, we are working hard to make Hive a 
production quality system. Hive has so far been tested only on Unix (Linux) 
and Mac systems using Java 1.6, although it may well work on other similar 
platforms. It does not work on Cygwin right now. Most of our testing has 
been against Hadoop 0.17, so we advise running it against this version of 
Hadoop, even though it may compile and work against other versions.

Useful mailing lists
--------------------

1. hive-user@hadoop.apache.org - To discuss and ask usage questions. Send an 
   empty email to hive-user-subscribe@hadoop.apache.org in order to subscribe 
   to this mailing list.

2. hive-core@hadoop.apache.org - For discussions around code, design and
   features. Send an empty email to hive-core-subscribe@hadoop.apache.org in
   order to subscribe to this mailing list.

3. hive-commits@hadoop.apache.org - In order to monitor the commits to the 
   source repository. Send an empty email to hive-commits-subscribe@hadoop.apache.org in
   order to subscribe to this mailing list.
 
Downloading and building
------------------------

- svn co http://svn.apache.org/repos/asf/hadoop/hive/trunk hive_trunk
- cd hive_trunk
- hive_trunk> ant -Dtarget.dir=<your-install-dir> -Dhadoop.version='0.17.0' package

You can replace 0.17.0 with 0.18.1, 0.19.0 etc to match the version of hadoop
that you are using.

In the rest of this README, we use dist and <install-dir> interchangeably
to refer to the directory passed as -Dtarget.dir above.

Running Hive
------------

Hive uses Hadoop, which means:
- you must have hadoop on your path, OR
- export HADOOP=<hadoop-install-dir>/bin/hadoop

To use the Hive command line interface (cli) from the shell:
$ cd <install-dir>
$ bin/hive

Using Hive
----------

Configuration management overview
---------------------------------

- hive configuration is stored in <install-dir>/conf/hive-default.xml 
  and log4j configuration in hive-log4j.properties
- hive configuration is an overlay on top of hadoop, meaning that the 
  hadoop configuration variables are inherited by default.
- hive configuration can be manipulated by:
  o editing hive-default.xml and defining any desired variables 
    (including hadoop variables) in it
  o using the set command from the cli (see below) 
  o invoking hive using the syntax:
     * bin/hive -hiveconf x1=y1 -hiveconf x2=y2
    this sets the variables x1 and x2 to y1 and y2 respectively


Error Logs
----------
Hive uses log4j for logging. By default, logs are not emitted to the 
console by the cli. They are stored in the file:
- /tmp/{user.name}/hive.log

If you wish, the logs can be emitted to the console by adding the 
argument shown below:
- bin/hive -hiveconf hive.root.logger=INFO,console

Note that setting hive.root.logger via the 'set' command does not 
change logging properties, since they are determined at initialization time.

Error logs are very useful for debugging problems. Please attach them to 
any bugs (of which there are many!) that you report in the JIRA at
https://issues.apache.org/jira/browse/HIVE.


DDL Operations
--------------

Creating Hive tables and browsing through them

hive> CREATE TABLE pokes (foo INT, bar STRING);

Creates a table called pokes with two columns, the first an integer 
and the second a string.

hive> CREATE TABLE invites (foo INT, bar STRING) PARTITIONED BY (ds STRING);

Creates a table called invites with two columns and a partition column 
called ds. The partition column is a virtual column. It is not part 
of the data itself, but is derived from the partition into which a 
particular dataset is loaded.


By default, tables are assumed to use the text input format with fields 
delimited by ^A (ctrl-a). We will soon publish additional 
commands/recipes for adding binary (sequencefile) data, configurable 
delimiters, etc.
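By way of illustration, a minimal input file in this format can be generated 
from the shell (the file name kv_sample.txt is made up for this example):

```shell
# Write two rows of two columns each, separated by ^A (octal \001),
# the default delimiter for Hive's text input format.
# kv_sample.txt is an illustrative file name, not one Hive expects.
printf '1\001hello\n2\001world\n' > kv_sample.txt
```

A file like this can then be loaded with the LOAD DATA LOCAL statement 
described in the DML section below.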

hive> SHOW TABLES;

lists all the tables

hive> SHOW TABLES '.*s';

lists all the tables that end with 's'. The pattern matching follows Java 
regular expressions. See the following link for documentation: 
http://java.sun.com/javase/6/docs/api/java/util/regex/Pattern.html

hive> DESCRIBE invites;

shows the list of columns

hive> DESCRIBE EXTENDED invites;

shows the list of columns plus any other meta information about the table

Altering tables. A table's name can be changed, and columns can be added or replaced:

hive> ALTER TABLE pokes ADD COLUMNS (new_col INT);
hive> ALTER TABLE invites ADD COLUMNS (new_col2 INT COMMENT 'a comment');
hive> ALTER TABLE pokes REPLACE COLUMNS (c1 INT, c2 STRING);
hive> ALTER TABLE events RENAME TO 3koobecaf;

Dropping tables
hive> DROP TABLE pokes;


Metadata Store
--------------

Metadata is stored in an embedded Derby database whose location is determined 
by the hive configuration variable named javax.jdo.option.ConnectionURL. 
By default (see conf/hive-default.xml), this location is ./metastore_db.

Right now, in the default configuration, this metadata can only be seen by 
one user at a time.

The metastore can be stored in any database that is supported by JPOX. The 
location and the type of the RDBMS can be controlled by the two variables 
'javax.jdo.option.ConnectionURL' and 'javax.jdo.option.ConnectionDriverName'. 
Refer to the JDO (or JPOX) documentation for more details on supported databases. 
The database schema is defined in the JDO metadata annotations file package.jdo 
at src/contrib/hive/metastore/src/model.

In the future, the metastore itself can become a standalone server.


DML Operations
--------------

Loading data from flat files into Hive

hive> LOAD DATA LOCAL INPATH './examples/files/kv1.txt' OVERWRITE INTO TABLE pokes; 

Loads a file containing two ctrl-a-separated columns into the pokes table. 
'LOCAL' signifies that the input file is on the local file system. If 'LOCAL' 
is omitted, the file is looked for in HDFS.

The keyword 'OVERWRITE' signifies that existing data in the table is deleted. 
If the 'OVERWRITE' keyword is omitted, data files are appended to the existing data sets.

NOTES:
- NO verification of the data against the schema is performed
- if the file is in HDFS, it is moved into the hive-controlled file system namespace. 
  The root of the hive directory is specified by the option hive.metastore.warehouse.dir 
  in hive-default.xml. We advise that this directory exist before you try 
  to create tables via Hive.

hive> LOAD DATA LOCAL INPATH './examples/files/kv2.txt' OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-15');
hive> LOAD DATA LOCAL INPATH './examples/files/kv3.txt' OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-08');

The two LOAD statements above load data into two different partitions of the 
invites table. The invites table must have been created partitioned by the 
key ds for this to succeed.

Loading/Extracting data using Queries
-------------------------------------

Runtime configuration
---------------------

- Hive queries are executed as map-reduce jobs, and as such the behavior 
  of these queries can be controlled by the hadoop configuration variables

- The cli can be used to set any hadoop (or hive) configuration variable. For example:
   o hive> SET mapred.job.tracker=myhost.mycompany.com:50030
   o hive> SET -v
  The latter shows all the current settings. Without the -v option, only the 
  variables that differ from the base hadoop configuration are displayed
- In particular, the number of reducers should be set to a reasonable number 
  to get good performance (the default is 1!)
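For example, the number of reducers can be raised before running a large 
query (32 is just an illustrative value; pick a number appropriate for your 
cluster):

hive> SET mapred.reduce.tasks=32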


EXAMPLE QUERIES
---------------

Some example queries are shown below. More are available in the hive code:
ql/src/test/queries/{positive,clientpositive}.

SELECTS and FILTERS
-------------------

hive> SELECT a.foo FROM invites a;

selects column 'foo' from all rows of the invites table. The results are not
stored anywhere; they are displayed on the console.

Note that in all the examples that follow, the INSERT clause (into a hive 
table, local directory or HDFS directory) is optional.

hive> INSERT OVERWRITE DIRECTORY '/tmp/hdfs_out' SELECT a.* FROM invites a;

selects all rows from the invites table into an HDFS directory. The result 
data ends up in one or more files (depending on the number of mappers) in 
that directory.
NOTE: partition columns, if any, are selected by the use of *. They can also 
be specified explicitly in the projection clause.
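For example, the partition column ds of invites can be named explicitly in 
the projection (the output directory here is made up):

hive> INSERT OVERWRITE DIRECTORY '/tmp/hdfs_out2' SELECT a.foo, a.bar, a.ds FROM invites a;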

hive> INSERT OVERWRITE LOCAL DIRECTORY '/tmp/local_out' SELECT a.* FROM pokes a;

Selects all rows from the pokes table into a local directory.

hive> INSERT OVERWRITE TABLE events SELECT a.* FROM profiles a;
hive> INSERT OVERWRITE TABLE events SELECT a.* FROM profiles a WHERE a.key < 100; 
hive> INSERT OVERWRITE LOCAL DIRECTORY '/tmp/reg_3' SELECT a.* FROM events a;
hive> INSERT OVERWRITE DIRECTORY '/tmp/reg_4' select a.invites, a.pokes FROM profiles a;
hive> INSERT OVERWRITE DIRECTORY '/tmp/reg_5' SELECT COUNT(1) FROM invites a;
hive> INSERT OVERWRITE DIRECTORY '/tmp/reg_5' SELECT a.foo, a.bar FROM invites a;
hive> INSERT OVERWRITE LOCAL DIRECTORY '/tmp/sum' SELECT SUM(a.pc) FROM pc1 a;

Sum of a column. avg, min and max can also be used.
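For example, an average can be computed the same way over the table from the 
query above (the output directory is illustrative):

hive> INSERT OVERWRITE LOCAL DIRECTORY '/tmp/avg' SELECT AVG(a.pc) FROM pc1 a;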

NOTE: there are some flaws in the type system that cause doubles to be 
returned where integer types would be expected. We expect to fix these in the coming weeks.

GROUP BY
--------

hive> FROM invites a INSERT OVERWRITE TABLE events SELECT a.bar, count(1) WHERE a.foo > 0 GROUP BY a.bar;
hive> INSERT OVERWRITE TABLE events SELECT a.bar, count(1) FROM invites a WHERE a.foo > 0 GROUP BY a.bar;

NOTE: Currently Hive always uses a two-stage map-reduce for the group-by 
operation. This is done to handle skews in the input data. We will be 
optimizing this in the coming weeks.

JOIN
----

hive> FROM pokes t1 JOIN invites t2 ON (t1.bar = t2.bar) INSERT OVERWRITE TABLE events SELECT t1.bar, t1.foo, t2.foo;

MULTITABLE INSERT
-----------------
FROM src
INSERT OVERWRITE TABLE dest1 SELECT src.* WHERE src.key < 100
INSERT OVERWRITE TABLE dest2 SELECT src.key, src.value WHERE src.key >= 100 and src.key < 200
INSERT OVERWRITE TABLE dest3 PARTITION(ds='2008-04-08', hr='12') SELECT src.key WHERE src.key >= 200 and src.key < 300
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/dest4.out' SELECT src.value WHERE src.key >= 300

STREAMING
---------
hive> FROM invites a INSERT OVERWRITE TABLE events 
    > MAP a.foo, a.bar USING '/bin/cat' 
    > AS oof, rab WHERE a.ds > '2008-08-09';

This streams the data in the map phase through the script /bin/cat (as in 
hadoop streaming). Similarly, streaming can be used on the reduce side. 
Please look at the files mapreduce*.q for examples.
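As a sketch of what reduce-side streaming looks like in those test files 
(the tables src and dest1 and the column names here are illustrative):

hive> FROM (
    >   FROM src MAP src.key, src.value USING '/bin/cat' AS mkey, mvalue
    >   CLUSTER BY mkey
    > ) mapped
    > INSERT OVERWRITE TABLE dest1
    > REDUCE mapped.mkey, mapped.mvalue USING '/bin/cat' AS rkey, rvalue;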

KNOWN BUGS/ISSUES
-----------------
* the hive cli may hang for a couple of minutes because of a bug in getting
  metadata from the Derby database. Let it run and you'll be fine!
* the hive cli creates derby.log in the directory from which it is invoked.
* COUNT(*) does not work for now. Use COUNT(1) instead.
* ORDER BY is not supported yet.
* CASE is not supported yet.
* Only string and thrift types (http://developers.facebook.com/thrift) have been tested.
* When doing a JOIN, put the table with the largest number of rows per join key
  rightmost in the JOIN clause. Otherwise you may see OutOfMemory errors.

FUTURE FEATURES
---------------
* EXPLODE function to generate multiple rows from a column of list type.
* Table statistics for query optimization.

Developing Hive using Eclipse
-----------------------------
1. Follow the 3 steps in "Downloading and building" section above

2. Change the first line of conf/hive-log4j.properties to the following
line to see error messages on the console:
hive.root.logger=INFO,console

3. Run the tests to make sure everything works. This may take 20 minutes.
ant -Dhadoop.version='0.17.0' -logfile test.log test

4. Create an empty Java project in Eclipse and close it.

5. Add the following section to Eclipse project's .project file:
	<linkedResources>
		<link>
			<name>cli_src_java</name>
			<type>2</type>
			<location>/xxx/hive_trunk/cli/src/java</location>
		</link>
		<link>
			<name>common_src_java</name>
			<type>2</type>
			<location>/xxx/hive_trunk/common/src/java</location>
		</link>
		<link>
			<name>metastore_src_gen-javabean</name>
			<type>2</type>
			<location>/xxx/hive_trunk/metastore/src/gen-javabean</location>
		</link>
		<link>
			<name>metastore_src_java</name>
			<type>2</type>
			<location>/xxx/hive_trunk/metastore/src/java</location>
		</link>
		<link>
			<name>metastore_src_model</name>
			<type>2</type>
			<location>/xxx/hive_trunk/metastore/src/model</location>
		</link>
		<link>
			<name>ql_src_java</name>
			<type>2</type>
			<location>/xxx/hive_trunk/ql/src/java</location>
		</link>
		<link>
			<name>serde_src_gen-java</name>
			<type>2</type>
			<location>/xxx/hive_trunk/serde/src/gen-java</location>
		</link>
		<link>
			<name>serde_src_java</name>
			<type>2</type>
			<location>/xxx/hive_trunk/serde/src/java</location>
		</link>
	</linkedResources>

6. Add the following list to the Eclipse project's .classpath file:
	<classpathentry kind="src" path="src"/>
	<classpathentry kind="src" path="metastore_src_model"/>
	<classpathentry kind="src" path="metastore_src_gen-javabean"/>
	<classpathentry kind="src" path="serde_src_gen-java"/>
	<classpathentry kind="src" path="cli_src_java"/>
	<classpathentry kind="src" path="ql_src_java"/>
	<classpathentry kind="src" path="metastore_src_java"/>
	<classpathentry kind="src" path="serde_src_java"/>
	<classpathentry kind="src" path="common_src_java"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/lib/asm-3.1.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/lib/commons-cli-2.0-SNAPSHOT.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/lib/commons-collections-3.2.1.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/lib/commons-lang-2.4.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/lib/commons-logging-1.0.4.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/lib/commons-logging-api-1.0.4.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/lib/derby.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/lib/jdo2-api-2.1.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/lib/jpox-core-1.2.2.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/lib/jpox-enhancer-1.2.2.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/lib/jpox-rdbms-1.2.2.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/lib/libfb303.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/lib/libthrift.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/lib/log4j-1.2.15.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/lib/velocity-1.5.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/ql/lib/antlr-3.0.1.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/ql/lib/antlr-runtime-3.0.1.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/ql/lib/commons-jexl-1.1.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/ql/lib/stringtemplate-3.1b1.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/cli/lib/jline-0.9.94.jar"/>
	<classpathentry kind="lib" path="/xxx/hive_trunk/build/hadoopcore/hadoop-0.19.0/hadoop-0.19.0-core.jar"/>

7. Try building hive inside Eclipse, and develop using Eclipse.


Development Tips
----------------
* You may use the following lines to test a specific testcase with a specific query file.
ant -Dhadoop.version='0.17.0' -Dtestcase=TestParse -Dqfile=udf4.q test
ant -Dhadoop.version='0.17.0' -Dtestcase=TestParseNegative -Dqfile=invalid_dot.q test
ant -Dhadoop.version='0.17.0' -Dtestcase=TestCliDriver -Dqfile=udf1.q test
ant -Dhadoop.version='0.17.0' -Dtestcase=TestNegativeCliDriver -Dqfile=invalid_tbl_name.q test
