name value description
hadoop.common.configuration.version 0.23.0 version of this configuration file
hadoop.tmp.dir /tmp/hadoop-${user.name} A base for other temporary directories.
io.native.lib.available true
Controls whether to use native libraries for bz2 and zlib compression codecs
or not. The property does not control any other native libraries.
hadoop.http.filter.initializers org.apache.hadoop.http.lib.StaticUserWebFilter
A comma separated list of class names. Each class in the list must extend
org.apache.hadoop.http.FilterInitializer. The corresponding Filter will be
initialized. Then, the Filter will be applied to all user-facing jsp and servlet
web pages. The ordering of the list defines the ordering of the filters.
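As a sketch of how this property is typically overridden in core-site.xml, the snippet below appends a filter initializer after the default StaticUserWebFilter; com.example.MyFilterInitializer is a hypothetical class name used only for illustration, and the order of the list determines the order of the filters.

  <configuration>
    <property>
      <name>hadoop.http.filter.initializers</name>
      <!-- comma separated; each class must extend org.apache.hadoop.http.FilterInitializer -->
      <value>org.apache.hadoop.http.lib.StaticUserWebFilter,com.example.MyFilterInitializer</value>
    </property>
  </configuration>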
hadoop.security.authorization false Is service-level authorization enabled?
hadoop.security.instrumentation.requires.admin false
Indicates if administrator ACLs are required to access instrumentation
servlets (JMX, METRICS, CONF, STACKS).
hadoop.security.authentication simple Possible values are simple (no authentication) and kerberos.
hadoop.security.group.mapping org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback
Class for user to group mapping (get groups for a given user) for ACL. The
default implementation,
org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback, will
determine if the Java Native Interface (JNI) is available. If JNI is available
the implementation will use the API within hadoop to resolve a list of groups
for a user. If JNI is not available then the shell implementation,
ShellBasedUnixGroupsMapping, is used. This implementation shells out to
the Linux/Unix environment with the bash -c groups command to resolve a
list of groups for a user.
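For example, a core-site.xml override can pin the mapping to the pure shell implementation instead of the JNI-with-fallback wrapper; this is a minimal sketch, not a recommendation.

  <configuration>
    <property>
      <name>hadoop.security.group.mapping</name>
      <!-- resolve groups by shelling out instead of via JNI -->
      <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>
    </property>
  </configuration>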
hadoop.security.dns.interface
The name of the Network Interface from which the service should determine
its host name for Kerberos login. e.g. eth2. In a multi-homed environment,
the setting can be used to affect the _HOST substitution in the service
Kerberos principal. If this configuration value is not set, the service will use
its default hostname as returned by InetAddress.getLocalHost().getCanonicalHostName(). Most clusters will not require this setting.
hadoop.security.dns.nameserver
The host name or IP address of the name server (DNS) which a service Node
should use to determine its own host name for Kerberos Login. Requires
hadoop.security.dns.interface. Most clusters will not require this setting.
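On a multi-homed node, the two DNS properties above might be set together in core-site.xml as sketched below; the interface name eth2 and the name server address 10.0.0.2 are assumed values for illustration.

  <configuration>
    <property>
      <name>hadoop.security.dns.interface</name>
      <value>eth2</value> <!-- interface whose address drives _HOST substitution -->
    </property>
    <property>
      <name>hadoop.security.dns.nameserver</name>
      <value>10.0.0.2</value> <!-- DNS server used to resolve the service's own host name -->
    </property>
  </configuration>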
hadoop.security.dns.log-slow-lookups.enabled false
Time name lookups (via SecurityUtil) and log them if they exceed the
configured threshold.
hadoop.security.dns.log-slow-lookups.threshold.ms 1000
If slow lookup logging is enabled, this threshold is used to decide if a lookup
is considered slow enough to be logged.
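A sketch of enabling slow-lookup logging with a tighter threshold than the 1000 ms default; the 500 ms value is an assumption, not a recommendation.

  <configuration>
    <property>
      <name>hadoop.security.dns.log-slow-lookups.enabled</name>
      <value>true</value>
    </property>
    <property>
      <name>hadoop.security.dns.log-slow-lookups.threshold.ms</name>
      <value>500</value> <!-- log name lookups slower than 500 ms -->
    </property>
  </configuration>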
hadoop.security.groups.cache.secs 300
This config controls how long entries in the user->group mapping cache remain valid. When this duration has expired, the group mapping provider is invoked again to fetch the user's groups, and the result is cached.
hadoop.security.groups.negative-cache.secs 30
Expiration time for entries in the negative user-to-group mapping
caching, in seconds. This is useful when invalid users are retrying frequently.
It is suggested to set a small value for this expiration, since a transient error
in group lookup could temporarily lock out a legitimate user. Set this to zero
or a negative value to disable negative user-to-group caching.
hadoop.security.groups.cache.warn.after.ms 5000
If looking up the groups for a single user takes longer than this number of milliseconds, a warning message is logged.
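As an illustration of how the three caching properties above interact, the following core-site.xml sketch lengthens the positive-cache TTL, disables negative caching, and lowers the warning threshold; the specific values are assumptions, not recommendations.

  <configuration>
    <property>
      <name>hadoop.security.groups.cache.secs</name>
      <value>600</value> <!-- user->group entries stay valid for 10 minutes -->
    </property>
    <property>
      <name>hadoop.security.groups.negative-cache.secs</name>
      <value>0</value> <!-- zero or a negative value disables negative caching -->
    </property>
    <property>
      <name>hadoop.security.groups.cache.warn.after.ms</name>
      <value>2000</value> <!-- warn if a single lookup exceeds 2 seconds -->
    </property>
  </configuration>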
hadoop.security.groups.cache.background.reload false
Whether to reload expired user->group mappings using a background thread
pool. If set to true, a pool of
hadoop.security.groups.cache.background.reload.threads is created to update
the cache in the background.
hadoop.security.groups.cache.background.reload.threads 3
Only relevant if hadoop.security.groups.cache.background.reload is true.
Controls the number of concurrent background user->group cache entry
refreshes. Pending refresh requests beyond this value are queued and
processed when a thread is free.
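A minimal sketch of turning on background refresh of expired cache entries with a larger thread pool; the thread count is an assumed value.

  <configuration>
    <property>
      <name>hadoop.security.groups.cache.background.reload</name>
      <value>true</value> <!-- refresh expired user->group entries asynchronously -->
    </property>
    <property>
      <name>hadoop.security.groups.cache.background.reload.threads</name>
      <value>5</value> <!-- concurrent refresh threads; extra requests are queued -->
    </property>
  </configuration>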
hadoop.security.groups.shell.command.timeout 0s
Used by the ShellBasedUnixGroupsMapping class, this property controls
how long to wait for the underlying shell command that is run to fetch
groups. Expressed as a time duration (e.g. 10s, 1m), if the command takes longer than the configured value, it is aborted and the groups resolver returns an empty result (no groups found). A value of 0s (the default) means an infinite wait, i.e. wait until the command exits on its own.
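For example, to bound the shell-based group lookup at 45 seconds instead of waiting indefinitely (a sketch with an assumed value):

  <configuration>
    <property>
      <name>hadoop.security.groups.shell.command.timeout</name>
      <value>45s</value> <!-- abort the groups command after 45 s; 0s waits forever -->
    </property>
  </configuration>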
hadoop.security.group.mapping.ldap.connection.timeout.ms 60000
This property is the connection timeout (in milliseconds) for LDAP
operations. If the LDAP provider doesn't establish a connection within the
specified period, it will abort the connect attempt. A non-positive value means no LDAP connection timeout is specified, in which case the client waits for the connection to be established until the underlying network times out.
hadoop.security.group.mapping.ldap.read.timeout.ms 60000
This property is the read timeout (in milliseconds) for LDAP operations. If
the LDAP provider doesn't get an LDAP response within the specified period, it will abort the read attempt. A non-positive value means no read timeout is specified, in which case the client waits for the response indefinitely.
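A sketch of tightening both LDAP timeouts so that group resolution fails fast when the directory server is unreachable; the 5 and 10 second values are assumptions.

  <configuration>
    <property>
      <name>hadoop.security.group.mapping.ldap.connection.timeout.ms</name>
      <value>5000</value> <!-- abort connect attempts after 5 s -->
    </property>
    <property>
      <name>hadoop.security.group.mapping.ldap.read.timeout.ms</name>
      <value>10000</value> <!-- abort reads after 10 s -->
    </property>
  </configuration>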
hadoop.security.group.mapping.ldap.url
The URL of the LDAP server to use for resolving user groups when using
the LdapGroupsMapping user to group mapping.
hadoop.security.group.mapping.ldap.ssl false Whether or not to use SSL when connecting to the LDAP server.
hadoop.security.group.mapping.ldap.ssl.keystore
File path to the SSL keystore that contains the SSL certificate required by the
LDAP server.
hadoop.security.group.mapping.ldap.ssl.keystore.password.file
The path to a file containing the password of the LDAP SSL keystore. If the
password is not configured in credential providers and the property
hadoop.security.group.mapping.ldap.ssl.keystore.password is not set,
LDAPGroupsMapping reads password from the file. IMPORTANT: This file
should be readable only by the Unix user running the daemons and should be
a local file.
hadoop.security.group.mapping.ldap.ssl.keystore.password
The password of the LDAP SSL keystore. This property name is used as an alias to get the password from credential providers. If the password cannot be found and hadoop.security.credential.clear-text-fallback is true, LDAPGroupsMapping uses the value of this property as the password.
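Putting the SSL-related properties above together, a core-site.xml sketch for connecting to an LDAP server over SSL might look like the following; the URL and file paths are assumed values, and the hadoop.security.group.mapping override is included only to show how LdapGroupsMapping is selected in the first place.

  <configuration>
    <property>
      <name>hadoop.security.group.mapping</name>
      <value>org.apache.hadoop.security.LdapGroupsMapping</value>
    </property>
    <property>
      <name>hadoop.security.group.mapping.ldap.url</name>
      <value>ldaps://ldap.example.com:636</value>
    </property>
    <property>
      <name>hadoop.security.group.mapping.ldap.ssl</name>
      <value>true</value>
    </property>
    <property>
      <name>hadoop.security.group.mapping.ldap.ssl.keystore</name>
      <value>/etc/hadoop/ldap-keystore.jks</value>
    </property>
    <property>
      <name>hadoop.security.group.mapping.ldap.ssl.keystore.password.file</name>
      <!-- local file, readable only by the Unix user running the daemons -->
      <value>/etc/hadoop/ldap-keystore.pw</value>
    </property>
  </configuration>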
hadoop.security.credential.clear-text-fallback true
true or false to indicate whether or not to fall back to storing the credential password as clear text. The default value is true. This property only takes effect when the password cannot be found in the configured credential providers.
hadoop.security.credential.provider.path
A comma-separated list of URLs that indicates the type and location of a list
of providers that should be consulted.
hadoop.security.credstore.java-keystore-provider.password-file
The path to a file containing the custom password for all keystores that may
be configured in the provider path.
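A sketch of pointing the credential provider path at a local JCEKS keystore and protecting it with a custom password file; the provider URL and file name are assumed values. Aliases are typically added to such a keystore with the hadoop credential CLI (e.g. hadoop credential create myalias -provider jceks://file/etc/hadoop/credentials.jceks).

  <configuration>
    <property>
      <name>hadoop.security.credential.provider.path</name>
      <!-- comma-separated list of provider URLs, consulted in order -->
      <value>jceks://file/etc/hadoop/credentials.jceks</value>
    </property>
    <property>
      <name>hadoop.security.credstore.java-keystore-provider.password-file</name>
      <!-- assumed file name; holds the custom password for the keystores above -->
      <value>credstore.pw</value>
    </property>
  </configuration>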
hadoop.security.group.mapping.ldap.bind.user
The distinguished name of the user to bind as when connecting to the LDAP
server. This may be left blank if the LDAP server supports anonymous binds.
hadoop.security.group.mapping.ldap.bind.password.file
The path to a file containing the password of the bind user. If the password is
not configured in credential providers and the property
hadoop.security.group.mapping.ldap.bind.password is not set,
LDAPGroupsMapping reads password from the file. IMPORTANT: This file
should be readable only by the Unix user running the daemons and should be
a local file.
hadoop.security.group.mapping.ldap.bind.password
The password of the bind user. This property name is used as an alias to get the password from credential providers. If the password cannot be found and hadoop.security.credential.clear-text-fallback is true, LDAPGroupsMapping uses the value of this property as the password.
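Finally, combining the bind-related properties, a sketch of authenticating to the LDAP server with a dedicated service account whose password lives in a local file; the distinguished name and path are assumed values, and these settings would be used alongside the ldap.url and SSL settings sketched earlier.

  <configuration>
    <property>
      <name>hadoop.security.group.mapping.ldap.bind.user</name>
      <value>cn=hadoop-svc,ou=services,dc=example,dc=com</value>
    </property>
    <property>
      <name>hadoop.security.group.mapping.ldap.bind.password.file</name>
      <!-- local file, readable only by the Unix user running the daemons -->
      <value>/etc/hadoop/ldap-bind.pw</value>
    </property>
  </configuration>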