
Hive Interview Questions Compilation


Problem 1

Problem Description

Meituan interview question: given a user login table t_login(user_id, login_date), find the users who logged in on three or more consecutive days.

Data Preparation

CREATE TABLE `t_login`(
    `user_id` string,
    `login_date` string COMMENT 'YYYY-MM-DD'
)
;

WITH t_login AS (
    SELECT '001' AS user_id, '2020-12-01' AS login_date UNION ALL
    SELECT '001' AS user_id, '2020-12-02' AS login_date UNION ALL
    SELECT '001' AS user_id, '2020-12-03' AS login_date UNION ALL
    SELECT '001' AS user_id, '2020-12-04' AS login_date UNION ALL
    SELECT '001' AS user_id, '2020-12-06' AS login_date UNION ALL
    SELECT '001' AS user_id, '2020-12-07' AS login_date UNION ALL
    SELECT '001' AS user_id, '2020-12-08' AS login_date UNION ALL
    SELECT '001' AS user_id, '2020-12-11' AS login_date UNION ALL
    SELECT '001' AS user_id, '2020-12-12' AS login_date UNION ALL
    SELECT '002' AS user_id, '2020-11-29' AS login_date UNION ALL
    SELECT '002' AS user_id, '2020-11-30' AS login_date UNION ALL
    SELECT '002' AS user_id, '2020-12-01' AS login_date UNION ALL
    SELECT '002' AS user_id, '2020-12-04' AS login_date UNION ALL
    SELECT '002' AS user_id, '2020-12-06' AS login_date
)

Solution

The hard part of this problem is identifying runs of consecutive login days. My first idea was to sort each user's logins by date and use LAG to subtract the previous date; rows where the difference is 1 belong to a streak. On reflection that is not enough, because you still have to count how many consecutive 1s there are. The better approach: sort each user's logins by date and assign a row number. Within a streak the login date and the row number both increase by 1, so the date minus the row number is constant; grouping by user and that constant value and counting rows gives the length of each consecutive run.

SELECT
    user_id
FROM(
    SELECT
        user_id,
        DATE_SUB(login_date, rn) AS new_date
    FROM(
        SELECT
            user_id,
            login_date,
            ROW_NUMBER() OVER(PARTITION BY user_id ORDER BY login_date) AS rn
        FROM t_login
    ) t
) t
GROUP BY
    user_id,
    new_date
HAVING COUNT(*) >= 3
;
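As a sanity check, the date-minus-row-number trick can be simulated in plain Python (a hypothetical sketch mirroring the sample t_login rows, not part of the original solution):

```python
from datetime import date, timedelta
from collections import defaultdict

# Hypothetical mirror of the t_login sample rows from the data preparation.
logins = {
    '001': ['2020-12-01', '2020-12-02', '2020-12-03', '2020-12-04',
            '2020-12-06', '2020-12-07', '2020-12-08', '2020-12-11', '2020-12-12'],
    '002': ['2020-11-29', '2020-11-30', '2020-12-01', '2020-12-04', '2020-12-06'],
}

def users_with_streak(logins, min_days=3):
    """Return users with at least one run of min_days consecutive login dates."""
    result = set()
    for user, days in logins.items():
        groups = defaultdict(int)
        for rn, d in enumerate(sorted(days), start=1):
            # Within a consecutive run, date minus row number is constant,
            # exactly like DATE_SUB(login_date, rn) in the query above.
            anchor = date.fromisoformat(d) - timedelta(days=rn)
            groups[anchor] += 1
        if any(n >= min_days for n in groups.values()):
            result.add(user)
    return result

print(sorted(users_with_streak(logins)))  # ['001', '002']
```

Both sample users qualify: 001 via 12-01 to 12-04, and 002 via 11-29 to 12-01.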

Extension

This problem naturally extends to the start and end dates of each streak, so here is that part as well: the longest login streak per user, together with its start and end dates.

SELECT
    user_id,
    start_date,
    end_date,
    login_days
FROM(
    SELECT
        user_id,
        new_date,
        start_date,
        end_date,
        login_days,
        ROW_NUMBER() OVER(PARTITION BY user_id ORDER BY login_days DESC, start_date DESC) AS rn
    FROM(
        SELECT
            user_id,
            new_date,
            min(login_date) AS start_date,
            max(login_date) AS end_date,
            COUNT(*) AS login_days
        FROM(
            SELECT
                user_id,
                login_date,
                DATE_SUB(login_date, rn - 1) AS new_date
            FROM(
                SELECT
                    user_id,
                    login_date,
                    ROW_NUMBER() OVER(PARTITION BY user_id ORDER BY login_date) AS rn
                FROM t_login
            ) t
        ) t
        GROUP BY
            user_id,
            new_date
    )t
)t
WHERE rn = 1
;

Problem 2

Problem Description

Meituan interview question: transform the original data into the target data.

Original data:

ID   NAME       PARENT_ID
1    北京市     0
2    山东省     0
3    昌平区     1
4    海淀区     1
5    沙河镇     3
6    马池口镇   3
7    中关村     4
8    上地       4
9    烟台市     2
10   青岛市     2
11   牟平区     9
12   即墨区     10

Target data:

Level-1 Name   Level-2 Name   Level-3 Name
北京市         昌平区         沙河镇
北京市         昌平区         马池口镇
北京市         海淀区         中关村
北京市         海淀区         上地
山东省         烟台市         牟平区
山东省         青岛市         即墨区

Data Preparation

WITH location_info AS(
    SELECT 1  AS ID, '北京市'   AS NAME, 0  AS PARENT_ID UNION ALL
    SELECT 2  AS ID, '山东省'   AS NAME, 0  AS PARENT_ID UNION ALL
    SELECT 3  AS ID, '昌平区'   AS NAME, 1  AS PARENT_ID UNION ALL
    SELECT 4  AS ID, '海淀区'   AS NAME, 1  AS PARENT_ID UNION ALL
    SELECT 5  AS ID, '沙河镇'   AS NAME, 3  AS PARENT_ID UNION ALL
    SELECT 6  AS ID, '马池口镇' AS NAME, 3  AS PARENT_ID UNION ALL
    SELECT 7  AS ID, '中关村'   AS NAME, 4  AS PARENT_ID UNION ALL
    SELECT 8  AS ID, '上地'     AS NAME, 4  AS PARENT_ID UNION ALL
    SELECT 9  AS ID, '烟台市'   AS NAME, 2  AS PARENT_ID UNION ALL
    SELECT 10 AS ID, '青岛市'   AS NAME, 2  AS PARENT_ID UNION ALL
    SELECT 11 AS ID, '牟平区'   AS NAME, 9  AS PARENT_ID UNION ALL
    SELECT 12 AS ID, '即墨区'   AS NAME, 10 AS PARENT_ID
)

Solution

This is a basic SQL problem: joining the table to itself twice yields the target result.

SELECT
    t1.name AS `一级地名`,
    t2.name AS `二级地名`,
    t3.name AS `三级地名`
FROM location_info t1
LEFT JOIN location_info t2 on t1.id = t2.parent_id
LEFT JOIN location_info t3 on t2.id = t3.parent_id
WHERE t1.parent_id = 0
;
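The double self-join can be sketched in Python as two chained parent_id lookups (illustrative only, using the rows from the data preparation):

```python
# The location rows from the data-preparation step, as (id, name, parent_id).
rows = [
    (1, '北京市', 0), (2, '山东省', 0), (3, '昌平区', 1), (4, '海淀区', 1),
    (5, '沙河镇', 3), (6, '马池口镇', 3), (7, '中关村', 4), (8, '上地', 4),
    (9, '烟台市', 2), (10, '青岛市', 2), (11, '牟平区', 9), (12, '即墨区', 10),
]

# Index children by parent_id, then chain two lookups, mirroring
# t1 JOIN t2 ON t1.id = t2.parent_id JOIN t3 ON t2.id = t3.parent_id.
children = {}
for _id, name, parent in rows:
    children.setdefault(parent, []).append((_id, name))

triples = [
    (n1, n2, n3)
    for id1, n1 in children.get(0, [])       # level 1: parent_id = 0
    for id2, n2 in children.get(id1, [])     # level 2
    for id3, n3 in children.get(id2, [])     # level 3
]
for t in triples:
    print(t)
```

This yields the six rows of the target table.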

Problem 3

Problem Description

Meituan interview question: transform the original data into the target data.

Original data:

ID   STUDENT   SUBJECT
1    裘容絮    语文,数学,英语
2    冼殿      数学,化学

Target data:

ID   STUDENT   SUBJECT
1    裘容絮    语文
1    裘容絮    数学
1    裘容絮    英语
2    冼殿      数学
2    冼殿      化学

Data Preparation

WITH sutdent_subject AS(
    SELECT 1 AS ID, '裘容絮' AS STUDENT, '语文,数学,英语' AS SUBJECT UNION ALL
    SELECT 2 AS ID, '冼殿'   AS STUDENT, '数学,化学' AS SUBJECT
)

Solution

This is also a basic problem, testing LATERAL VIEW and EXPLODE.

SELECT
    id,
    student,
    new_subject as subject
FROM sutdent_subject
LATERAL VIEW EXPLODE(SPLIT(subject, ',')) table_subject as new_subject
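The explode step amounts to the following reshaping (a minimal Python sketch of the same sample rows; note the subjects are joined with full-width commas, which is why the SPLIT delimiter must match):

```python
# The two sample rows; the subjects are joined with full-width commas ','.
rows = [(1, '裘容絮', '语文,数学,英语'), (2, '冼殿', '数学,化学')]

# One output row per subject, like EXPLODE(SPLIT(subject, ',')).
flat = [(i, name, subj) for i, name, subjects in rows
        for subj in subjects.split(',')]
print(len(flat))  # 5 exploded rows
```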

Problem 4

Problem Description

Meituan interview question: given a table user_label(user_id, label, result), find the users and labels where the value in result is 1 and the label also appears in the label field.

Data Preparation

CREATE TABLE `user_label`(
    `user_id` string,
    `label` string COMMENT '1001,1002,1003,1004',
    `result` string COMMENT '1001:1,1002:2,1005:5'
)
;

with user_label as(
    select 'ABC' as user_id,'1001,1002,1003,1004' as label,'1001:1,1002:2,1005:5' as result
)

Solution

The idea: explode both label and result into one row per element, then keep the pairs where the value in result is 1 and the key also appears in the label list. For the sample row in the data preparation, only 1001 qualifies.

select
    user_id,
    new_label,
    new_result
FROM(
    select
        user_id,
        new_label,
        new_result
    FROM user_label
    LATERAL VIEW EXPLODE(SPLIT(label,  ',')) table_label  AS new_label
    LATERAL VIEW EXPLODE(SPLIT(result, ',')) table_result AS new_result
) t
where SPLIT(new_result, ':')[1] = 1
    and new_label = SPLIT(new_result, ':')[0]
;
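The double explode plus filter can be illustrated in Python on the single sample row (a hypothetical sketch, not part of the original answer):

```python
# The sample row from the data preparation.
label = '1001,1002,1003,1004'
result = '1001:1,1002:2,1005:5'

# Explode both fields and keep pairs whose value is 1 and whose key
# also appears in the label list, like the WHERE clause above.
labels = set(label.split(','))
hits = [k for k, v in (pair.split(':') for pair in result.split(','))
        if v == '1' and k in labels]
print(hits)  # ['1001']
```

1002 is dropped because its value is 2, and 1005 because it is absent from label.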

Problem 5

Problem Description

Given a user table user_info(user_id, user_name), a credit table user_credit(user_id, credit_amount), and a loan table user_loan(user_id, loan_id, loan_amount), where users map one-to-one to credit records and one-to-many to loans, compute the average loan amount and the average credit amount among users who have borrowed.

Data Preparation

CREATE TABLE `user_info`(
    `user_id` string,
    `user_name` string
)
;

CREATE TABLE `user_credit`(
    `user_id` string,
    `credit_amount` double
)
;

CREATE TABLE `user_loan`(
    `user_id` string,
    `loan_id` string,
    `loan_amount` double
)
;


Solution

At first glance these are routine metrics, so I wrote the following SQL:

SELECT
    AVG(t2.credit_amount) AS avg_credit_amount,
    AVG(t3.loan_amount) AS avg_loan_amount
FROM user_info t1
LEFT JOIN user_credit t2 ON t1.user_id = t2.user_id
LEFT JOIN user_loan t3 ON t1.user_id = t3.user_id

It looks simple: join the user table to the credit and loan tables, then take the averages. But the SQL above is wrong! Behind seemingly simple problems there are always pitfalls, big and small. Each user has exactly one credit line but may have multiple loans, so joining without any handling attaches the credit amount to every loan record, inflating the numbers when the average is computed.
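A tiny Python sketch (with made-up numbers) shows how the fan-out inflates the credit average:

```python
# Hypothetical numbers: u1 has one credit line but three loans.
credit = {'u1': 100.0, 'u2': 200.0}
loans = [('u1', 10.0), ('u1', 20.0), ('u1', 30.0), ('u2', 40.0)]

# After the join, u1's single credit row appears once per loan row.
joined_credits = [credit[u] for u, _ in loans]
naive_avg = sum(joined_credits) / len(joined_credits)  # (100*3 + 200) / 4
correct_avg = sum(credit.values()) / len(credit)       # (100 + 200) / 2
print(naive_avg, correct_avg)  # 125.0 150.0
```

The joined average over-weights u1's credit line three times, which is why the fixed queries below divide a SUM by COUNT(DISTINCT user_id) instead.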

-- Average credit amount
SELECT
    SUM(t2.credit_amount) / COUNT(DISTINCT t1.user_id) AS avg_credit_amount
FROM user_info t1
LEFT JOIN user_credit t2 ON t1.user_id = t2.user_id
;

-- Average loan amount
SELECT
    SUM(t2.loan_amount) / COUNT(DISTINCT t1.user_id) AS avg_loan_amount
FROM user_info t1
LEFT JOIN user_loan t2 ON t1.user_id = t2.user_id
;

Problem 6

Problem Description

Kuaishou interview question: given a city's internet-cafe access log netbar_log(netbar_id, user_id, online_time, offline_time), count the pairs of users in that city who definitely know each other.

  • Rule 1: if two users' online and offline times at the same cafe are within 10 minutes of each other, the two may know each other.

  • Rule 2: if the same two users satisfy Rule 1 at three or more cafes, the two definitely know each other.

Data Preparation

CREATE TABLE `netbar_log`(
    `netbar_id` bigint COMMENT 'internet cafe ID',
    `user_id` bigint COMMENT 'visitor ID (national ID number)',
    `online_time` string COMMENT 'online time YYYY-MM-DD HH:mm:ss',
    `offline_time` string COMMENT 'offline time YYYY-MM-DD HH:mm:ss'
)
;

WITH netbar_log AS(
    SELECT 1 AS netbar_id, 1 AS user_id, '2020-12-01 10:11:12' AS online_time, '2020-12-01 12:13:14' AS offline_time UNION ALL
    SELECT 1 AS netbar_id, 2 AS user_id, '2020-12-01 10:11:12' AS online_time, '2020-12-01 12:13:14' AS offline_time UNION ALL
    SELECT 1 AS netbar_id, 3 AS user_id, '2020-12-01 10:11:12' AS online_time, '2020-12-01 12:13:14' AS offline_time UNION ALL
    SELECT 2 AS netbar_id, 1 AS user_id, '2020-12-02 10:11:12' AS online_time, '2020-12-02 12:13:14' AS offline_time UNION ALL
    SELECT 2 AS netbar_id, 2 AS user_id, '2020-12-02 10:11:12' AS online_time, '2020-12-02 12:13:14' AS offline_time UNION ALL
    SELECT 2 AS netbar_id, 3 AS user_id, '2020-12-02 10:11:12' AS online_time, '2020-12-02 12:13:14' AS offline_time UNION ALL
    SELECT 3 AS netbar_id, 1 AS user_id, '2020-12-03 10:11:12' AS online_time, '2020-12-03 12:13:14' AS offline_time UNION ALL
    SELECT 3 AS netbar_id, 2 AS user_id, '2020-12-03 10:11:12' AS online_time, '2020-12-03 12:13:14' AS offline_time UNION ALL
    SELECT 3 AS netbar_id, 3 AS user_id, '2020-12-03 10:11:12' AS online_time, '2020-12-03 12:13:14' AS offline_time UNION ALL
    SELECT 4 AS netbar_id, 1 AS user_id, '2020-12-04 10:11:12' AS online_time, '2020-12-04 12:13:14' AS offline_time UNION ALL
    SELECT 4 AS netbar_id, 2 AS user_id, '2020-12-04 10:11:12' AS online_time, '2020-12-04 12:13:14' AS offline_time UNION ALL
    SELECT 4 AS netbar_id, 3 AS user_id, '2020-12-04 10:11:12' AS online_time, '2020-12-04 12:13:14' AS offline_time
)

Solution

WITH bar_1 AS (
    SELECT
        netbar_id,
        user_id,
        unix_timestamp(online_time) AS online_timestamp,
        unix_timestamp(offline_time) AS offline_timestamp
    FROM netbar_log
),
bar_2 AS (
    SELECT
        netbar_id,
        user_id,
        unix_timestamp(online_time) AS online_timestamp,
        unix_timestamp(offline_time) AS offline_timestamp
    FROM netbar_log
)

SELECT
    user_id_1,
    user_id_2,
    COUNT(DISTINCT netbar_id) AS ct
FROM (
    SELECT
        t1.netbar_id,
        t1.user_id AS user_id_1,
        t2.user_id AS user_id_2
    FROM bar_1 t1
    JOIN bar_2 t2 ON t1.netbar_id = t2.netbar_id
        AND t1.user_id > t2.user_id
        AND (ABS(t1.online_timestamp - t2.online_timestamp) <= 600 OR ABS(t1.offline_timestamp - t2.offline_timestamp) <= 600)
) t
GROUP BY
    user_id_1,
    user_id_2
HAVING ct >= 3
;
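The pair-counting logic can be sketched in Python (illustrative; the 10-minute window check is skipped here because every sample user shares identical online/offline times):

```python
from itertools import combinations
from collections import defaultdict

# cafe -> users seen there, mirroring the four sample cafes.
visits = {1: [1, 2, 3], 2: [1, 2, 3], 3: [1, 2, 3], 4: [1, 2, 3]}

pair_cafes = defaultdict(set)
for cafe, users in visits.items():
    # Ordered pairs, like the t1.user_id > t2.user_id join condition,
    # so each pair is counted once rather than twice.
    for a, b in combinations(sorted(users), 2):
        pair_cafes[(a, b)].add(cafe)

known = sorted(p for p, cafes in pair_cafes.items() if len(cafes) >= 3)
print(known)  # [(1, 2), (1, 3), (2, 3)]
```

All three pairs co-occur at four cafes, so all three count as "definitely know each other".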

Problem 11

Problem Description

Kuaishou interview question: given a table people_income(id, year, income) holding every person's yearly income nationwide from 1980 to 2020, find the ids of the top 10,000 earners for each year.

Data Preparation

CREATE TABLE `people_income`(
  `id` string,
  `year` string,
  `income` double
);

Solution

At first glance this is a grouped top-10000 problem, which seemed easy, so I wrote the following answer:

select
    id,
    year,
    income
from(
    select
        id,
        year,
        income,
        row_number() over(partition by year order by income desc) as rn
    from people_income
)t
where rn <= 10000
;

The SQL above produces the correct result, but consider the premise: 40 years of nationwide data, at roughly one billion people per year, is about 40 billion rows. Grouping by year and sorting a billion rows per group simply will not finish on ordinary machines.

Hence the optimized SQL below. The idea: first compute each year's average income and filter out everyone below it, then scatter the remaining rows with a random-number function, take the top 10,000 within each bucket, and finally merge the buckets and take the global top 10,000. The average-income step is optional; you could scatter, rank, and filter directly. I have not measured how much the two variants differ, so I simply wrote down everything I thought of.
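The key property behind the two-pass ranking can be checked with a small Python simulation (hypothetical data and bucket counts): any row in the global top-k is necessarily in the top-k of its own bucket, so the union of per-bucket top-k rows always contains the global top-k.

```python
import random

# Hypothetical incomes; the true top 10 is id999 .. id990.
incomes = [(f'id{i}', float(i)) for i in range(1000)]
k = 10

# Stage 1: scatter rows into random buckets and keep each bucket's top-k,
# like row_number() over(partition by year, rand_number ...).
random.seed(42)
buckets = {}
for row in incomes:
    buckets.setdefault(random.randrange(8), []).append(row)

survivors = []
for rows in buckets.values():
    survivors.extend(sorted(rows, key=lambda r: -r[1])[:k])

# Stage 2: rank only the few survivors; the global top-k must be among them.
top_k = sorted(survivors, key=lambda r: -r[1])[:k]
assert top_k == sorted(incomes, key=lambda r: -r[1])[:k]
print(top_k[0])  # ('id999', 999.0)
```

The second sort touches at most buckets × k rows instead of the full table, which is what makes the Hive version tractable.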

with avg_income as (
    select
        year,
        sum(income) / sum(people_num) as avg_inc
    from(
        select
            year,
            rand_number,
            sum(income) as income,
            count(*) as people_num
        from(
            select
                year,
                income,
                cast(rand() * 100 as int) as rand_number
            from people_income
        )t
        group by
            year,
            rand_number
    )t
    group by
        year
)

select
    id,
    year,
    income
from(
    select
        id,
        year,
        income,
        row_number() over(partition by year order by income desc) as rn
    from(
        select
            id,
            year,
            income,
            row_number() over(partition by year,rand_number order by income desc) as rn
        from(
            select
                t1.id,
                t1.year,
                t1.income,
                cast(rand() * 1000 as int) as rand_number
            from people_income t1
            left join avg_income t2 on t1.year = t2.year
            where t1.income > t2.avg_inc
        )t
    )t
    where rn <= 10000
)t
where rn <= 10000
;

I saw another solution online that treats the id as a national ID number and partitions by its first four digits, which is also a neat idea.

SELECT 
    id,
    year,
    income
FROM (
    SELECT
        id,
        year,
        income,
        ROW_NUMBER() OVER(PARTITION BY year ORDER BY income DESC) AS rn
    FROM (
        SELECT
            id,
            year,
            income,
            ROW_NUMBER() OVER(PARTITION BY year, SUBSTR(id, 1, 4) ORDER BY income DESC) AS rn
        FROM people_income
    )t
    WHERE rn <= 10000
)t
WHERE rn <= 10000
;

Problem 12

Problem Description

Kuaishou interview question: given a university exam score table score(id, subject, score), find the students whose score is in the top 30% of every subject.

Data Preparation

CREATE TABLE `score`(
  `id` string,
  `subject` string,
  `score` double
);

Solution

This problem looks just like the student-course example from the database-principles course in college. It seemed easy at first, but finding the top 30% stumped me, until I remembered a rarely used function I had noted before, cume_dist, which made me appreciate Hive's depth.
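What cume_dist computes can be sketched in Python (illustrative scores for a single subject; `cume_dist` here is a hand-rolled stand-in for the Hive window function):

```python
# cume_dist() within a partition, ascending: the fraction of rows whose
# value is <= the current row's value. Being in the top 30% of a subject
# then means cume_dist over ascending score is greater than 0.7.
def cume_dist(values):
    n = len(values)
    return {v: sum(1 for x in values if x <= v) / n for v in values}

scores = [50, 60, 70, 80, 90, 95, 99, 100, 85, 65]  # distinct sample scores
cd = cume_dist(scores)
top30 = sorted(v for v in scores if cd[v] > 0.7)
print(top30)  # [95, 99, 100]
```

Three of the ten sample scores clear the 0.7 threshold, i.e. exactly the top 30%.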

-- Students whose scores are in the top 30% of every subject
select distinct
    t1.id
from(
    select
        id,
        subject,
        score,
        cume_dist() over(partition by subject order by score) as score_percent
    from score
)t1
left join(
    -- Students with at least one score outside the top 30% of that subject
    select
        id
    from(
        select
            id,
            subject,
            score,
            cume_dist() over(partition by subject order by score) as score_percent
        from score
    )t
    where score_percent < 0.7
    group by
        id
)t2 on t1.id = t2.id
where t1.score_percent > 0.7
    and t2.id is null
;



SELECT DISTINCT
    t1.id
FROM score t1
LEFT JOIN (
    -- Students with at least one score outside the top 30% of that subject
    SELECT DISTINCT
        id
    FROM(
        SELECT
            id,
            subject,
            score,
            cume_dist() over(partition by subject order by score desc) as score_percent
        FROM score
    )t
    WHERE score_percent > 0.3
)t2 ON t1.id = t2.id
WHERE t2.id IS NULL


  • There are 100,000 Taobao shops, and every customer visit to any shop generates one access log. The logs are stored in table Visit, with user ID field uid and shop field store. Compute each shop's UV.
SELECT store, COUNT(DISTINCT uid) AS uv FROM visit GROUP BY store;
  • Table Users holds 100 million users, with unique ID field UID, user age age, and total spend total. Using SQL (Hive or Spark SQL) and Spark, describe how to sort users by age descending, breaking ties by total spend ascending.
SELECT * FROM users ORDER BY age DESC, total;
  • There is a user life-stage table LifeStage with unique user ID field UID and a stage field containing life-stage labels joined by commas, e.g. 计划买车,已买房, with different content per user. Use Hive SQL to count the number of users in each life stage.
SELECT new_stage, COUNT(*) AS user_num FROM lifestage LATERAL VIEW EXPLODE(SPLIT(stage, ',')) f AS new_stage GROUP BY new_stage;
This article is licensed by the author under CC BY 4.0.