Integrating Azure AD with Elastic for central user management

Is there any available guide or custom realm for integrating Azure AD with Elastic? Since Azure AD doesn't expose the traditional Active Directory ports, it seems a custom realm is required to use its API.

Any help would be appreciated.

Thanks

Do you mean using X-Pack?

Yes, using the Security feature.

https://www.elastic.co/guide/en/x-pack/current/active-directory-realm.html

Hello Warkolm,

Any update on this?

Thanks

You can integrate with Azure AD using the X-Pack SAML realm, if that suits your needs.
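
For reference, a minimal elasticsearch.yml sketch of what such a realm can look like against Azure AD on 6.x (the realm name, example.com hostnames, and <tenant-id> placeholder are illustrative; check the SAML guide for your exact version):

    xpack.security.authc.realms.saml_azure:
      type: saml
      order: 2
      # Metadata file downloaded from the Azure AD enterprise application,
      # resolved relative to the Elasticsearch config directory
      idp.metadata.path: "saml/azure-metadata.xml"
      # Azure AD issuer; <tenant-id> is a placeholder for your directory ID
      idp.entity_id: "https://sts.windows.net/<tenant-id>/"
      # These must match the URLs registered for the application in Azure AD
      sp.entity_id: "https://kibana.example.com/"
      sp.acs: "https://kibana.example.com/api/security/v1/saml"
      sp.logout: "https://kibana.example.com/logout"
      attributes.principal: "nameid:persistent"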


When is Elastic going to add the Elasticsearch application to the Azure AD application gallery for Enterprise Applications?

We have no immediate plans to do so, but it may happen in the future.

Where can I define the Azure tenant/key/subscription ID for the ELK stack to use the Azure IdP? Is there an API definition for Elastic?

The metadata file obtained from the application created in Azure is not being loaded. Is there a plugin to correct the Azure metadata?

Error:

Caused by: org.opensaml.core.xml.io.UnmarshallingException: net.shibboleth.utilities.java.support.xml.XMLParserException: Unable to parse inputstream, it contained invalid XML
    at org.opensaml.saml.metadata.resolver.impl.AbstractMetadataResolver.unmarshallMetadata(AbstractMetadataResolver.java:355) ~[?:?]
    at org.opensaml.saml.metadata.resolver.impl.AbstractReloadingMetadataResolver.unmarshallMetadata(AbstractReloadingMetadataResolver.java:340) ~[?:?]
    at org.opensaml.saml.metadata.resolver.impl.AbstractReloadingMetadataResolver.processNewMetadata(AbstractReloadingMetadataResolver.java:381) ~[?:?]
    at org.opensaml.saml.metadata.resolver.impl.AbstractReloadingMetadataResolver.refresh(AbstractReloadingMetadataResolver.java:291) ~[?:?]
    at org.opensaml.saml.metadata.resolver.impl.AbstractReloadingMetadataResolver.initMetadataResolver(AbstractReloadingMetadataResolver.java:262) ~[?:?]
    at org.opensaml.saml.metadata.resolver.impl.AbstractMetadataResolver.doInitialize(AbstractMetadataResolver.java:287) ~[?:?]
    at net.shibboleth.utilities.java.support.component.AbstractInitializableComponent.initialize(AbstractInitializableComponent.java:61) ~[?:?]
    at org.elasticsearch.xpack.security.authc.saml.SamlRealm.lambda$initialiseResolver$10(SamlRealm.java:565) ~[?:?]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
    at org.elasticsearch.xpack.security.authc.saml.SamlRealm.initialiseResolver(SamlRealm.java:564) ~[?:?]
    at org.elasticsearch.xpack.security.authc.saml.SamlRealm.parseFileSystemMetadata(SamlRealm.java:529) ~[?:?]
    at org.elasticsearch.xpack.security.authc.saml.SamlRealm.initializeResolver(SamlRealm.java:455) ~[?:?]
    at org.elasticsearch.xpack.security.authc.saml.SamlRealm.create(SamlRealm.java:192) ~[?:?]
    at org.elasticsearch.xpack.security.authc.InternalRealms.lambda$getFactories$5(InternalRealms.java:119) ~[?:?]
    at org.elasticsearch.xpack.security.authc.Realms.initRealms(Realms.java:189) ~[?:?]
    at org.elasticsearch.xpack.security.authc.Realms.<init>(Realms.java:75) ~[?:?]
    at org.elasticsearch.xpack.security.Security.createComponents(Security.java:446) ~[?:?]
    at org.elasticsearch.xpack.security.Security.createComponents(Security.java:377) ~[?:?]
    at org.elasticsearch.node.Node.lambda$new$7(Node.java:397) ~[elasticsearch-6.2.0.jar:6.2.0]
    at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267) ~[?:1.8.0_112]
    at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374) ~[?:1.8.0_112]
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_112]
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_112]
    at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[?:1.8.0_112]
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:1.8.0_112]
    at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) ~[?:1.8.0_112]
    at org.elasticsearch.node.Node.<init>(Node.java:400) ~[elasticsearch-6.2.0.jar:6.2.0]
    at org.elasticsearch.node.Node.<init>(Node.java:246) ~[elasticsearch-6.2.0.jar:6.2.0]
    at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:213) ~[elasticsearch-6.2.0.jar:6.2.0]
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:213) ~[elasticsearch-6.2.0.jar:6.2.0]
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.0.jar:6.2.0]
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.0.jar:6.2.0]

You shouldn't need any special plugin. Something is wrong with that metadata file - we can't even parse the XML.
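
As a quick sanity check, you can validate the file locally, for example with xmllint (assuming the metadata file is named azure-metadata.xml):

    xmllint --noout azure-metadata.xml

If the file is not well-formed XML, xmllint prints the first parse error it hits.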

If you want to walk us through the steps you're trying we'll be happy to assist with getting this working, but I can't do much with random error messages and no context.

That would be awesome. Please schedule something around EST hours, any day.

Thanks
Jay

I think Tim means you need to provide more details here.

Sorry about the confusion.

I'm fairly new to the ELK stack. Please let me know which debug mode to use here. Also, it seems the XML file needed to be converted to Unix line endings (a quick conversion sketch follows the log below). Once converted, Elasticsearch is up and running, but login is still locked out. I'm also receiving the warning below:

[2018-03-28T14:06:24,451][INFO ][o.e.c.m.MetaDataMappingService] [glogger-1] [.watcher-history-7-2018.03.28/VTQWlhsOQy6ehtNZwsGQEg] update_mapping [doc]
[2018-03-28T14:06:24,491][INFO ][o.e.c.m.MetaDataMappingService] [glogger-1] [.watcher-history-7-2018.03.28/VTQWlhsOQy6ehtNZwsGQEg] update_mapping [doc]
[2018-03-28T14:06:24,965][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [glogger-1] caught exception while handling client http traffic, closing connection [id: 0xa4f6fcee, L:0.0.0.0/0.0.0.0:9200 ! R:/127.0.0.1:56572]
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 48454144202f20485454502f312e310d0a436f6e6e656374696f6e3a204b6565702d416c6976650d0a436f6e74656e742d547970653a206170706c69636174696f6e2f6a736f6e0d0a486f73743a206c6f63616c686f73743a393230300d0a557365722d4167656e743a204d616e7469636f726520302e362e310d0a4163636570742d456e636f64696e673a20677a69702c6465666c6174650d0a417574686f72697a6174696f6e3a204261736963205a57786863335270597a706c6247467a64476c6a0d0a0d0a
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) [netty-transport-4.1.16.Final.jar:4.1.16.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.16.Final.jar:4.1.16.Final]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 48454144202f20485454502f312e310d0a436f6e6e656374696f6e3a204b6565702d416c6976650d0a436f6e74656e742d547970653a206170706c69636174696f6e2f6a736f6e0d0a486f73743a206c6f63616c686f73743a393230300d0a557365722d4167656e743a204d616e7469636f726520302e362e310d0a4163636570742d456e636f64696e673a20677a69702c6465666c6174650d0a417574686f72697a6174696f6e3a204261736963205a57786863335270597a706c6247467a64476c6a0d0a0d0a
    at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1106) ~[?:?]
    at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1162) ~[?:?]
    at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[?:?]
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[?:?]
    ... 15 more
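
As an aside, here is the line-ending conversion mentioned above (the filenames are illustrative; dos2unix converts in place, and tr is a fallback if dos2unix isn't installed):

    dos2unix azure-metadata.xml
    # or, without dos2unix:
    tr -d '\r' < azure-metadata-dos.xml > azure-metadata.xml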

This means that something (Kibana? your browser? it's hard to know without further context) is attempting to send plaintext traffic over a TLS connection to Elasticsearch.

Let's take a step back, though. Can you please make sure that you read the SAML guide and follow the relevant step-by-step instructions there? Once you've done that, it will be much easier to fill in the gaps and identify specific issues with specific parts of the configuration.
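
For reference, on 6.x the Kibana side of the SAML setup also needs settings roughly like the following in kibana.yml (a sketch; verify the exact names against the guide for your version):

    xpack.security.authProviders: [saml]
    server.xsrf.whitelist: [/api/security/v1/saml]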

This is a hex encoding of the HTTP headers that Elasticsearch received on the HTTPS port.
Decoding that gives:

HEAD / HTTP/1.1
Connection: Keep-Alive
Content-Type: application/json
Host: localhost:9200
User-Agent: Manticore 0.6.1
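
A minimal Python sketch of that decoding, using the leading portion of the hex payload from the exception above (enough to cover the headers shown):

    # Hex prefix copied from the NotSslRecordException message; each pair
    # of hex digits is one byte of the plaintext HTTP request.
    payload = (
        "48454144202f20485454502f312e310d0a"                                # HEAD / HTTP/1.1
        "436f6e6e656374696f6e3a204b6565702d416c6976650d0a"                  # Connection: Keep-Alive
        "436f6e74656e742d547970653a206170706c69636174696f6e2f6a736f6e0d0a"  # Content-Type: application/json
        "486f73743a206c6f63616c686f73743a393230300d0a"                      # Host: localhost:9200
        "557365722d4167656e743a204d616e7469636f726520302e362e310d0a"        # User-Agent: Manticore 0.6.1
    )
    print(bytes.fromhex(payload).decode("ascii"))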

Since it's running Manticore 0.6.1, my guess is that it's coming from Logstash.
Do you have a Logstash instance that's trying to connect to Elasticsearch over HTTP?
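
If so, the fix is to point its elasticsearch output at HTTPS, along these lines (a sketch; the user, password, and CA path are illustrative):

    output {
      elasticsearch {
        hosts    => ["https://localhost:9200"]
        user     => "logstash_writer"
        password => "changeme"
        ssl      => true
        cacert   => "/etc/logstash/certs/ca.crt"
      }
    }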

