When submitting a Hadoop job you can specify the target queue, for example: -Dmapred.job.queue.name=queue2
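As an illustration, a generic MapReduce job can be routed to a queue straight from the command line (a minimal sketch; the examples jar name and the HDFS input/output paths below are placeholders, not part of this article's setup):

hadoop jar $HADOOP_HOME/hadoop-examples-*.jar wordcount \
    -Dmapred.job.queue.name=queue2 \
    /user/hadoop/input /user/hadoop/output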
By configuring mapred-queue-acls.xml and mapred-site.xml, you can grant different users submission rights on different queues.
First edit mapred-site.xml and change the configuration as follows (adding four queues):
<property>
  <name>mapred.queue.names</name>
  <value>default,queue1,queue2,queue3,queue4</value>
  <description>Comma separated list of queues configured for this jobtracker.
    Jobs are added to queues and schedulers can configure different
    scheduling properties for the various queues. To configure a property
    for a queue, the name of the queue must match the name specified in this
    value. Queue properties that are common to all schedulers are configured
    here with the naming convention, mapred.queue.$QUEUE-NAME.$PROPERTY-NAME,
    for e.g. mapred.queue.default.submit-job-acl.
    The number of queues configured in this parameter could depend on the
    type of scheduler being used, as specified in
    mapred.jobtracker.taskScheduler. For example, the JobQueueTaskScheduler
    supports only a single queue, which is the default configured here.
    Before adding more queues, ensure that the scheduler you've configured
    supports multiple queues.
  </description>
</property>
Once the change takes effect, the configured queues are visible on the JobTracker web UI.
To control access to the queues, you also need to edit mapred-queue-acls.xml:
<property>
  <name>mapred.queue.queue1.acl-submit-job</name>
  <value>' '</value>
  <description>Comma separated list of user and group names that are allowed
    to submit jobs to the 'default' queue. The user list and the group list
    are separated by a blank. For e.g. user1,user2 group1,group2.
    If set to the special value '*', it means all users are allowed to
    submit jobs. If set to ' ' (i.e. space), no user will be allowed to submit
    jobs.

    It is only used if authorization is enabled in Map/Reduce by setting the
    configuration property mapred.acls.enabled to true.
    Irrespective of this ACL configuration, the user who started the cluster and
    cluster administrators configured via
    mapreduce.cluster.administrators can submit jobs.
  </description>
</property>
To configure multiple queues, simply repeat the block above for each queue, changing the queue name and the value. To make testing easy, queue1 forbids all users from submitting jobs; an opposite example is sketched below.
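A queue can instead be opened to specific users and groups, or to everyone with the special value '*'. A hedged sketch for queue2 (the user and group names here are placeholders):

<property>
  <name>mapred.queue.queue2.acl-submit-job</name>
  <!-- users and groups separated by a blank; use '*' to allow everyone -->
  <value>user1,user2 group1,group2</value>
</property>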
For these ACLs to be enforced at all, mapred-site.xml must also set mapred.acls.enabled to true:
<property>
  <name>mapred.acls.enabled</name>
  <value>true</value>
  <description>Specifies whether ACLs should be checked
    for authorization of users for doing various queue and job level operations.
    ACLs are disabled by default. If enabled, access control checks are made by
    JobTracker and TaskTracker when requests are made by users for queue
    operations like submit job to a queue and kill a job in the queue and job
    operations like viewing the job-details (See mapreduce.job.acl-view-job)
    or for modifying the job (See mapreduce.job.acl-modify-job) using
    Map/Reduce APIs, RPCs or via the console and web user interfaces.
  </description>
</property>
Restart Hadoop so the configuration takes effect, then test with Hive.
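A minimal restart sketch, assuming the stock 0.20.x control scripts under $HADOOP_HOME/bin (these are MapReduce-side properties, so bouncing the MapReduce daemons should be enough):

# restart JobTracker and TaskTrackers to pick up the new queue/ACL settings
$HADOOP_HOME/bin/stop-mapred.sh
$HADOOP_HOME/bin/start-mapred.sh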
First, submit through the queue2 queue:
hive> set mapred.job.queue.name=queue2;
hive> select count(*) from t_aa_pc_log;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201205211843_0002, Tracking URL = http://192.168.189.128:50030/jobdetails.jsp?jobid=job_201205211843_0002
Kill Command = /opt/app/hadoop-0.20.2-cdh3u3/bin/hadoop job -Dmapred.job.tracker=192.168.189.128:9020 -kill job_201205211843_0002
2012-05-21 18:45:01,593 Stage-1 map = 0%, reduce = 0%
2012-05-21 18:45:04,613 Stage-1 map = 100%, reduce = 0%
2012-05-21 18:45:12,695 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201205211843_0002
OK
136003
Time taken: 14.674 seconds
hive>
The job completes successfully.
Now submit a job through the queue1 queue:
hive> set mapred.job.queue.name=queue1;
hive> select count(*) from t_aa_pc_log;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException: User p_sdo_data_01 cannot perform operation SUBMIT_JOB on queue queue1.
 Please run "hadoop queue -showacls" command to find the queues you have access to.
	at org.apache.hadoop.mapred.ACLsManager.checkAccess(ACLsManager.java:179)
	at org.apache.hadoop.mapred.ACLsManager.checkAccess(ACLsManager.java:136)
	at org.apache.hadoop.mapred.ACLsManager.checkAccess(ACLsManager.java:113)
	at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3781)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
The job submission fails, as expected.
Finally, the hadoop queue -showacls command lists the queue permissions of the current user:
[hadoop@localhost conf]$ hadoop queue -showacls
Queue acls for user :  hadoop

Queue  Operations
=====================
queue1  administer-jobs
queue2  submit-job,administer-jobs
queue3  submit-job,administer-jobs
queue4  submit-job,administer-jobs
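Note that the hadoop user still holds administer-jobs on queue1 even though submission is blocked: the two operations are governed by separate ACLs in mapred-queue-acls.xml. A hedged sketch of restricting who may administer (for example, kill) jobs in a queue, assuming the companion acl-administer-jobs property (the user and group names are placeholders):

<property>
  <name>mapred.queue.queue1.acl-administer-jobs</name>
  <!-- placeholder user/group; same "users groups" format as acl-submit-job -->
  <value>hadoop admingroup</value>
</property>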

