Configuring SSL encryption and authentication for Kafka 2.x

Background: I went through a pile of articles that each covered bits and pieces, and still could not get it working, so in the end I decided to follow the official documentation.

Official guide: the Apache Kafka documentation.

Prerequisite knowledge: keytool and openssl.

keytool reference: "keytool的使用" (CSDN blog).

openssl reference: "openssl常用命令大全" (CSDN blog).

For now we only look at the SSL security mechanism.

Apache Kafka allows clients to connect over SSL. SSL is disabled by default and can be turned on as needed.

1.1 Generate SSL keys and certificates for each Kafka broker

The first step in deploying one or more SSL-enabled brokers is to generate a key and certificate for each machine in the cluster. You can use Java's keytool utility for this. The key is initially generated into a temporary keystore so that it can be exported and signed by a CA later.

keytool -keystore server.keystore.jks -alias localhost -validity 700 -genkey -keyalg RSA

You need to specify two parameters in the command above:

  1. keystore: the keystore file that stores the certificate. It contains the certificate's private key, so it must be kept safe. Here it is server.keystore.jks.
  2. validity: how long the certificate is valid, in days. Here it is 700 days.
(A non-interactive variant that pins the hostname is sketched right after this list.)
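Hostname verification later compares the hostname that clients connect to against the certificate's CN (or SAN). As a hedged sketch, not part of the original walkthrough, the key pair can also be generated non-interactively with the CN set to the hostname; the passwords and dname fields below are placeholder values, and carrying a SAN through the later openssl signing step would need extra options not shown here:

keytool -keystore server.keystore.jks -alias localhost -validity 700 -genkey -keyalg RSA -storepass test1234 -keypass test1234 -dname "CN=localhost, OU=kafka, O=example, L=city, ST=state, C=CN"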

You can see that the corresponding keystore file has been generated in the current directory.

You can then run the following command to verify the contents of the generated certificate:

keytool -list -v -keystore server.keystore.jks

1.2 Create your own CA

After the first step, every machine in the cluster has a public-private key pair and a certificate identifying the machine. However, the certificate is unsigned, which means an attacker could create such a certificate to impersonate any machine.

It is therefore important to prevent forged certificates by signing the certificate of every machine in the cluster. A certificate authority (CA) is responsible for signing certificates. A CA works like a government issuing passports: the government stamps (signs) each passport so that it becomes hard to forge, and other governments verify the stamp to confirm the passport is genuine. In the same way, the CA signs certificates, and cryptography guarantees that a signed certificate is computationally hard to forge. As long as the CA is a genuine and trusted authority, clients can be highly confident that they are connecting to the real machines.

openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
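If you want to skip the interactive prompts, the same CA can be created non-interactively. This is only a sketch; the subject and password are placeholder values:

openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 -subj "/CN=kafka-ca" -passout pass:test1234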

The generated CA is simply a public-private key pair and a certificate whose purpose is to sign other certificates.
The next step is to add the generated CA to the truststores so that it is trusted. Here it is imported into the broker's truststore (the client's truststore receives the same import in the next step):

keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert

Unlike the keystore in step 1, which stores each machine's own identity, a truststore stores all the certificates that the machine should trust. Importing a certificate into a truststore also means trusting every certificate signed by it. As above, trusting the government (CA) also means trusting all the passports (certificates) it has issued. This property is called a chain of trust, and it is particularly useful when deploying SSL on a large Kafka cluster: you can sign all certificates in the cluster with a single CA and have every machine share the same truststore that trusts this CA. That way, all machines can authenticate all other machines.

1.3 Sign the certificates

The next step is to sign all certificates generated in step 1 with the CA generated in step 2. The CA certificate also needs to be imported into the client's truststore, just as it was for the broker's truststore above:

keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert

Then export the server certificate from the keystore as a signing request:

keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file

Then sign it with the CA:

openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 700 -CAcreateserial -passin pass:{ca-password}

Finally, you need to import both the CA's certificate and the signed certificate into the keystore:

keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert

keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed

The parameters are defined as follows:

  1. keystore: the location of the keystore
  2. ca-cert: the CA's certificate
  3. ca-key: the CA's private key
  4. ca-password: the CA's passphrase
  5. cert-file: the exported, unsigned server certificate
  6. cert-signed: the signed server certificate
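As an optional sanity check that is not part of the original steps, you can confirm that the signed certificate actually chains back to the CA, and inspect the keystore after the imports above:

openssl verify -CAfile ca-cert cert-signed

keytool -list -v -keystore server.keystore.jks

The keytool listing should now show both the CARoot entry and the signed localhost entry.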

1.4 Configure the Kafka brokers

Kafka brokers support listening for connections on multiple ports. We need to configure the following property in server.properties, which takes one or more comma-separated values:

If SSL is not enabled for inter-broker communication (see below for how to enable it), both a PLAINTEXT and an SSL port are needed.

listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093

Note that each listener must use a distinct port.

The broker side needs the following SSL configuration:

ssl.keystore.location=/home/lighthouse/server.keystore.jks

ssl.keystore.password=test1234

ssl.key.password=test1234

ssl.truststore.location=/home/lighthouse/server.truststore.jks

ssl.truststore.password=test1234

Note: ssl.truststore.password is technically optional but strongly recommended. If no password is set, access to the truststore still works, but integrity checking is disabled. Optional settings worth considering:

  1. ssl.client.auth=none ("required" => client authentication is required; "requested" => client authentication is requested, but a client without a certificate can still connect. Using "requested" is discouraged because it provides a false sense of security, and misconfigured clients will still connect successfully.)
  2. ssl.cipher.suites (optional). A cipher suite is a named combination of authentication, encryption, MAC, and key-exchange algorithms used to negotiate the security settings of a network connection using the TLS or SSL protocol. (The default is an empty list.)
  3. ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1 (lists the SSL protocols accepted from clients. Note that SSL has been deprecated in favor of TLS, and using SSL in production is not recommended.)
  4. ssl.keystore.type=JKS
  5. ssl.truststore.type=JKS
  6. ssl.secure.random.implementation=SHA1PRNG

If you want to enable SSL for communication between brokers, add the following to server.properties (the default is PLAINTEXT):

security.inter.broker.protocol=SSL
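Putting the pieces together, the SSL-related part of server.properties might look like the following. This is a sketch using the placeholder ports, paths, and passwords from this walkthrough, not mandated values:

listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/home/lighthouse/server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/home/lighthouse/server.truststore.jks
ssl.truststore.password=test1234
ssl.client.auth=none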

1.5 Configure Kafka clients

SSL is supported only for the new Kafka producer and consumer; the older APIs do not support it. The SSL configuration is the same for both producers and consumers.
If client authentication is not required by the broker, the following is a minimal configuration example:

security.protocol=SSL

ssl.truststore.location=/var/private/ssl/client.truststore.jks

ssl.truststore.password=test1234

Note: ssl.truststore.password is technically optional but strongly recommended. If no password is set, access to the truststore still works, but integrity checking is disabled. If client authentication is required, a keystore must be created as in step 1, and the following must also be configured:

ssl.keystore.location=/var/private/ssl/client.keystore.jks

ssl.keystore.password=test1234

ssl.key.password=test1234

Depending on our requirements and the broker configuration, other configuration settings may also be needed:

  1. ssl.provider (optional). The name of the security provider used for SSL connections. The default is the JVM's default security provider.
  2. ssl.cipher.suites (optional). A cipher suite is a named combination of authentication, encryption, MAC, and key-exchange algorithms used to negotiate the security settings of a network connection using the TLS or SSL protocol.
  3. ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1. It should list at least one of the protocols configured on the broker side.
  4. ssl.truststore.type=JKS
  5. ssl.keystore.type=JKS

The client-ssl.properties file shared by the producer and the consumer contains the following:
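A minimal sketch of what this file contains, based on the settings above; the path and password are the placeholder values used throughout this walkthrough:

security.protocol=SSL
ssl.truststore.location=/home/lighthouse/client.truststore.jks
ssl.truststore.password=test1234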

Example using console-producer and console-consumer:

./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config ./config/client-ssl.properties
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --consumer.config ./config/client-ssl.properties

This produced an error. To fix it, the following commands also need to be run in the user's home directory so that the client trusts the CA:

keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.keystore.jks -alias CARoot -import -file ca-cert

If the password is wrong, the following error is thrown as well:

lighthouse@VM-8-10-ubuntu:~/kafkaWithZk/kafka_2.12-2.2.1$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config ./config/client-ssl.properties
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
        at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:431)
        at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:299)
        at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:44)
        at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /home/lighthouse/client.truststore.jks of type JKS
        at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:73)
        at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
        at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:67)
        at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:99)
        at org.apache.kafka.clients.producer.KafkaProducer.newSender(KafkaProducer.java:439)
        at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:420)
        ... 3 more
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /home/lighthouse/client.truststore.jks of type JKS
        at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:144)
        at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:71)
        ... 8 more
Caused by: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /home/lighthouse/client.truststore.jks of type JKS
        at org.apache.kafka.common.security.ssl.SslFactory$SecurityStore.load(SslFactory.java:357)
        at org.apache.kafka.common.security.ssl.SslFactory.createSSLContext(SslFactory.java:248)
        at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:141)
        ... 9 more
Caused by: java.io.IOException: keystore password was incorrect
        at java.base/sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:2092)
        at java.base/sun.security.util.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:243)
        at java.base/java.security.KeyStore.load(KeyStore.java:1479)
        at org.apache.kafka.common.security.ssl.SslFactory$SecurityStore.load(SslFactory.java:354)
        ... 11 more
Caused by: java.security.UnrecoverableKeyException: failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
        ... 15 more

I then double-checked the configuration (including the password) in client-ssl.properties and started the producer again, which reported the following error:

lighthouse@VM-8-10-ubuntu:~/kafkaWithZk/kafka_2.12-2.2.1$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config ./config/client-ssl.properties
>[2024-03-19 13:42:49,783] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:49,835] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:49,937] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:50,140] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:50,543] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:51,298] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:52,203] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:53,158] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:54,264] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:55,220] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 13:42:56,376] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
^C^Clighthouse@VM-8-10-ubuntu:~/kafkaWithZk/kafka_2.12-2.2.1$

After verifying the password in server.properties, Kafka still reported an error on startup; the key part of the error is as follows:

[2024-03-19 14:34:31,955] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 14:34:31,957] WARN SSL handshake failed (kafka.utils.CoreUtils$)
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLHandshakeException: No name matching localhost found
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:360)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:303)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:298)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:654)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:473)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:369)
        at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
        at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:443)
        at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1076)
        at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1063)
        at java.base/java.security.AccessController.doPrivileged(Native Method)
        at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1010)
        at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:402)
        at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:484)
        at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:340)
        at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:265)
        at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:170)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
        at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:74)
        at kafka.server.KafkaServer.doControlledShutdown$1(KafkaServer.scala:510)
        at kafka.server.KafkaServer.controlledShutdown(KafkaServer.scala:563)
        at kafka.server.KafkaServer.$anonfun$shutdown$2(KafkaServer.scala:585)
        at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:86)
        at kafka.server.KafkaServer.shutdown(KafkaServer.scala:585)
        at kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:48)
        at kafka.Kafka$$anon$1.run(Kafka.scala:72)
Caused by: java.security.cert.CertificateException: No name matching localhost found
        at java.base/sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:234)
        at java.base/sun.security.util.HostnameChecker.match(HostnameChecker.java:103)
        at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:461)
        at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:435)
        at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:283)
        at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)
        at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:632)
        ... 24 more
[2024-03-19 14:34:31,957] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:31,960] INFO [/config/changes-event-process-thread]: Shutting down (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2024-03-19 14:34:31,961] INFO [/config/changes-event-process-thread]: Shutdown completed (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2024-03-19 14:34:31,961] INFO [/config/changes-event-process-thread]: Stopped (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2024-03-19 14:34:31,962] INFO [SocketServer brokerId=0] Stopping socket server request processors (kafka.network.SocketServer)
[2024-03-19 14:34:31,979] INFO [SocketServer brokerId=0] Stopped socket server request processors (kafka.network.SocketServer)
[2024-03-19 14:34:31,980] INFO [data-plane Kafka Request Handler on Broker 0], shutting down (kafka.server.KafkaRequestHandlerPool)
[2024-03-19 14:34:31,988] INFO [data-plane Kafka Request Handler on Broker 0], shut down completely (kafka.server.KafkaRequestHandlerPool)
[2024-03-19 14:34:31,995] INFO [KafkaApi-0] Shutdown complete. (kafka.server.KafkaApis)
[2024-03-19 14:34:31,997] INFO [ExpirationReaper-0-topic]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,059] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
^C[2024-03-19 14:34:32,114] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-03-19 14:34:32,132] INFO [ExpirationReaper-0-topic]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,132] INFO [ExpirationReaper-0-topic]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,134] INFO [TransactionCoordinator id=0] Shutting down. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-03-19 14:34:32,135] INFO [ProducerId Manager 0]: Shutdown complete: last producerId assigned 1000 (kafka.coordinator.transaction.ProducerIdManager)
[2024-03-19 14:34:32,136] INFO [Transaction State Manager 0]: Shutdown complete (kafka.coordinator.transaction.TransactionStateManager)
[2024-03-19 14:34:32,136] INFO [Transaction Marker Channel Manager 0]: Shutting down (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-03-19 14:34:32,139] INFO [Transaction Marker Channel Manager 0]: Stopped (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-03-19 14:34:32,140] INFO [Transaction Marker Channel Manager 0]: Shutdown completed (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-03-19 14:34:32,141] INFO [TransactionCoordinator id=0] Shutdown complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-03-19 14:34:32,141] INFO [GroupCoordinator 0]: Shutting down. (kafka.coordinator.group.GroupCoordinator)
[2024-03-19 14:34:32,144] INFO [ExpirationReaper-0-Heartbeat]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,160] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,261] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,344] INFO [ExpirationReaper-0-Heartbeat]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,344] INFO [ExpirationReaper-0-Heartbeat]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,344] INFO [ExpirationReaper-0-Rebalance]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,362] INFO [ExpirationReaper-0-Rebalance]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,362] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,363] INFO [ExpirationReaper-0-Rebalance]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,363] INFO [GroupCoordinator 0]: Shutdown complete. (kafka.coordinator.group.GroupCoordinator)
[2024-03-19 14:34:32,364] INFO [ReplicaManager broker=0] Shutting down (kafka.server.ReplicaManager)
[2024-03-19 14:34:32,364] INFO [LogDirFailureHandler]: Shutting down (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-03-19 14:34:32,366] INFO [LogDirFailureHandler]: Stopped (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-03-19 14:34:32,366] INFO [LogDirFailureHandler]: Shutdown completed (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-03-19 14:34:32,368] INFO [ReplicaFetcherManager on broker 0] shutting down (kafka.server.ReplicaFetcherManager)
[2024-03-19 14:34:32,369] INFO [ReplicaFetcherManager on broker 0] shutdown completed (kafka.server.ReplicaFetcherManager)
[2024-03-19 14:34:32,370] INFO [ReplicaAlterLogDirsManager on broker 0] shutting down (kafka.server.ReplicaAlterLogDirsManager)
[2024-03-19 14:34:32,370] INFO [ReplicaAlterLogDirsManager on broker 0] shutdown completed (kafka.server.ReplicaAlterLogDirsManager)
[2024-03-19 14:34:32,370] INFO [ExpirationReaper-0-Fetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,463] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,492] INFO [ExpirationReaper-0-Fetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,492] INFO [ExpirationReaper-0-Fetch]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,492] INFO [ExpirationReaper-0-Produce]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,564] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,666] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,674] INFO [ExpirationReaper-0-Produce]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,674] INFO [ExpirationReaper-0-Produce]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,674] INFO [ExpirationReaper-0-DeleteRecords]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,692] INFO [ExpirationReaper-0-DeleteRecords]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,692] INFO [ExpirationReaper-0-DeleteRecords]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,693] INFO [ExpirationReaper-0-ElectPreferredLeader]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,768] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,870] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-03-19 14:34:32,893] INFO [ExpirationReaper-0-ElectPreferredLeader]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,893] INFO [ExpirationReaper-0-ElectPreferredLeader]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-19 14:34:32,897] INFO [ReplicaManager broker=0] Shut down completely (kafka.server.ReplicaManager)
[2024-03-19 14:34:32,898] INFO Shutting down. (kafka.log.LogManager)
[2024-03-19 14:34:32,934] INFO Shutdown complete. (kafka.log.LogManager)
[2024-03-19 14:34:32,960] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2024-03-19 14:34:32,964] INFO Session: 0x100cd6124170002 closed (org.apache.zookeeper.ZooKeeper)
[2024-03-19 14:34:32,966] INFO EventThread shut down for session: 0x100cd6124170002 (org.apache.zookeeper.ClientCnxn)
[2024-03-19 14:34:32,966] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2024-03-19 14:34:32,968] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,168] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,168] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,168] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,170] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,170] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:33,170] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
^C[2024-03-19 14:34:33,740] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
^C[2024-03-19 14:34:33,972] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-03-19 14:34:34,170] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:34,170] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-19 14:34:34,172] INFO [SocketServer brokerId=0] Shutting down socket server (kafka.network.SocketServer)
^C[2024-03-19 14:34:34,204] INFO [SocketServer brokerId=0] Shutdown completed (kafka.network.SocketServer)
[2024-03-19 14:34:34,204] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-03-19 14:34:34,206] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer)
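The key line in this log is "javax.net.ssl.SSLHandshakeException: No name matching localhost found": hostname verification fails because the certificate's CN/SAN does not match the hostname "localhost" used for the connection. Two common remedies, stated here as assumptions rather than steps taken in this post, are to regenerate the certificate with CN=localhost (or a SAN entry of DNS:localhost), or to disable hostname verification by setting the endpoint identification algorithm to an empty value on both the broker and the client:

ssl.endpoint.identification.algorithm=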

After cleaning up the ca-*, cert-*, client-*, and server-* files, I regenerated the keys, certificates, CA, and signatures, as follows:

1. Generate SSL keys and certificates
keytool -keystore server.keystore.jks -alias localhost -validity 700 -genkey -keyalg RSA
keytool -keystore server.truststore.jks -alias localhost -validity 700 -genkey -keyalg RSA
keytool -keystore client.keystore.jks -alias localhost -validity 700 -genkey -keyalg RSA
keytool -keystore client.truststore.jks -alias localhost -validity 700 -genkey -keyalg RSA

2. Create my own CA
openssl req -new -x509 -keyout ca-key -out ca-cert -days 700
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 700 -CAcreateserial -passin pass:123456

3. Sign the certificates
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed

keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert

keytool -keystore client.keystore.jks -alias CARoot -import -file ca-cert

keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
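After restarting the broker with the regenerated stores, one way to probe the SSL listener directly is with openssl's built-in client; this is a debugging aid added here, not a step from the original troubleshooting, and it assumes the SSL listener is on port 9092:

openssl s_client -connect localhost:9092 -servername localhost < /dev/null

A successful handshake prints the certificate chain the broker presents, which makes it easy to check the CN/SAN and the issuing CA.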

I started ZooKeeper and Kafka again and ran the producer command, and it still failed:

[2024-03-19 17:52:38,773] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:38,876] INFO [Controller id=0, targetBrokerId=0] Failed authentication with localhost/127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:38,876] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2024-03-19 17:52:38,876] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:38,979] INFO [Controller id=0, targetBrokerId=0] Failed authentication with localhost/127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:38,980] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2024-03-19 17:52:38,980] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:39,083] INFO [Controller id=0, targetBrokerId=0] Failed authentication with localhost/127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:39,083] INFO [SocketServer brokerId=0] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2024-03-19 17:52:39,083] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
