
Encrypt with TLS¶

By default, Apache Kafka® communicates in PLAINTEXT, which means that all data is sent in the clear. To encrypt communication, you should configure all the Confluent Platform components in your deployment to use TLS/SSL encryption.

Confluent Platform supports Transport Layer Security (TLS) encryption based on OpenSSL, an open-source cryptography toolkit that provides an implementation of the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols. With TLS authentication, the server also authenticates the client (this is called “two-way authentication”).

Secure Sockets Layer (SSL) was the predecessor of Transport Layer Security (TLS) and has been deprecated since June 2015. For historical reasons, the name SSL is still used in configuration settings and code instead of TLS.

You can configure TLS for encryption, and you can also configure TLS for authentication. You can enable just TLS encryption (by default, TLS encryption includes certificate authentication of the server) and independently choose a separate mechanism for client authentication (for example, TLS or SASL). Technically speaking, TLS encryption already enables one-way authentication, in which the client authenticates the server certificate. In this topic, “TLS authentication” refers to two-way authentication, where the broker also authenticates the client certificate.
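The one-way versus two-way distinction is not Kafka-specific; it can be sketched with Python's standard ssl module (context objects only, no actual handshake is performed here):

```python
import ssl

# One-way TLS: the client verifies the server's certificate against
# its trusted CAs; the server does not ask the client for a certificate.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

# Two-way TLS ("TLS authentication"): the server-side context is
# additionally told to require and verify a certificate from the client.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.verify_mode = ssl.CERT_REQUIRED
```

In Kafka terms, the broker plays the server role, so two-way authentication corresponds to the broker verifying each client's certificate against its truststore.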

Enabling TLS may have a performance impact due to encryption overhead.

TLS uses private-key/certificate pairs, which are used during the TLS handshake process.

  • Each broker needs its own private-key/certificate pair, and the client uses the certificate to authenticate the broker.
  • Each logical client needs a private-key/certificate pair if client authentication is enabled, and the broker uses the certificate to authenticate the client.

You can configure each broker and logical client with a truststore, which is used to determine which certificates (broker or logical client identities) to trust (authenticate). You can configure the truststore in many ways. Consider the following two examples:

  • The truststore contains one or many certificates: the broker or logical client will trust any certificate listed in the truststore.
  • The truststore contains a Certificate Authority (CA): the broker or logical client will trust any certificate that was signed by the CA in the truststore.

Using the CA method is more convenient, because adding a new broker or client doesn’t require a change to the truststore. The CA method is outlined in this diagram.

However, with the CA method, Kafka does not conveniently support blocking authentication for individual brokers or clients that were previously trusted using this mechanism (certificate revocation is typically done using Certificate Revocation Lists or the Online Certificate Status Protocol), so you would have to rely on authorization to block access.

In contrast, if you use one or many certificates, blocking authentication is achieved by removing the broker or client’s certificate from the truststore.

For an example that shows how to set Docker environment variables for Confluent Platform running in ZooKeeper mode, see the Confluent Platform demo. Refer to the demo’s docker-compose.yml file for a configuration reference.

Create TLS keys and certificates¶

Refer to the Security Tutorial, which describes how to create TLS keys and certificates.

Brokers¶

Configure all brokers in the Kafka cluster to accept secure connections from clients. Any configuration changes made to the broker will require a rolling restart .

Enable security for Kafka brokers as described in the section below. Additionally, if you are using Confluent Control Center, Auto Data Balancer, or Self-Balancing, configure security for those components as well.

Configure the truststore, keystore, and passwords in the server.properties file of every broker:

ssl.truststore.location=/var/ssl/private/kafka.server.truststore.jks
ssl.truststore.password=test1234
ssl.keystore.location=/var/ssl/private/kafka.server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234

To enable TLS/SSL for inter-broker communication, add the following to the broker properties file:

security.inter.broker.protocol=SSL

Configure the ports on which the broker listens for TLS/SSL connections:

listeners=SSL://kafka1:9093
advertised.listeners=SSL://:9093

Configure both PLAINTEXT and SSL ports if either of the following is true:

  • TLS/SSL is not enabled for interbroker communication
  • Some clients connecting to the cluster do not use TLS/SSL

listeners=PLAINTEXT://kafka1:9092,SSL://kafka1:9093
advertised.listeners=PLAINTEXT://:9092,SSL://:9093

Note that advertised.host.name and advertised.port configure a single PLAINTEXT port and are incompatible with secure protocols. Use advertised.listeners instead.

Optional settings¶

Here are some optional settings:

ssl.cipher.suites A cipher suite is a named combination of authentication, encryption, MAC, and key exchange algorithms used to negotiate the security settings for a network connection using the TLS/SSL network protocol.

  • Type: list
  • Default: null (by default, all supported cipher suites are enabled)
  • Importance: medium

ssl.enabled.protocols The list of protocols enabled for TLS/SSL connections.

ssl.truststore.type The file format of the truststore file.

Due to import regulations in some countries, the Oracle implementation limits the strength of cryptographic algorithms available by default. If stronger algorithms are needed (for example, AES with 256-bit keys), the JCE Unlimited Strength Jurisdiction Policy Files must be obtained and installed in the JDK/JRE. See the JCA Providers Documentation for more information.

Clients¶

The new Producer and Consumer clients support security for Kafka versions 0.9.0 and higher.

If you are using the Kafka Streams API, refer to its security documentation for how to configure equivalent SSL and SASL parameters.

If client authentication is not required by the broker, the following is a minimal configuration example that you can store in a client properties file, client-ssl.properties. Because this file stores passwords directly, it is important to restrict access to it via file system permissions.

bootstrap.servers=kafka1:9093
security.protocol=SSL
ssl.truststore.location=/var/ssl/private/kafka.client.truststore.jks
ssl.truststore.password=test1234
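The properties above are for JVM clients. A Python client built on confluent-kafka (librdkafka) uses the same property names for the broker address and security protocol, but reads the CA certificate from a PEM file rather than a JKS truststore. A minimal sketch, with a placeholder PEM path:

```python
# Minimal TLS-encryption config for a confluent-kafka (librdkafka) Python client.
# librdkafka does not read JKS truststores; export the CA certificate to PEM
# first (for example, with keytool and openssl). The paths are placeholders.
def ssl_client_config(bootstrap="kafka1:9093",
                      ca_pem="/var/ssl/private/ca-cert.pem"):
    return {
        "bootstrap.servers": bootstrap,
        "security.protocol": "SSL",
        # CA certificate(s) used to verify the broker's certificate
        "ssl.ca.location": ca_pem,
    }

conf = ssl_client_config()
# A producer would then be created with:
#   from confluent_kafka import Producer
#   p = Producer(conf)
```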

If client authentication using TLS/SSL is required, the client must provide the keystore as well. You can read about the additional configurations required in TLS/SSL Authentication.
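For a confluent-kafka Python client, client authentication means additionally supplying the client's own certificate and private key, again as PEM files. A sketch under those assumptions (the file paths and password are illustrative, not prescribed by the source):

```python
# TLS config with client authentication for confluent-kafka (librdkafka).
# ssl.certificate.location and ssl.key.location are librdkafka settings;
# the paths and password below are placeholders for illustration.
def mtls_client_config(bootstrap="kafka1:9093"):
    return {
        "bootstrap.servers": bootstrap,
        "security.protocol": "SSL",
        # CA used to verify the broker's certificate
        "ssl.ca.location": "/var/ssl/private/ca-cert.pem",
        # certificate and key the client presents to the broker
        "ssl.certificate.location": "/var/ssl/private/client-cert.pem",
        "ssl.key.location": "/var/ssl/private/client-key.pem",
        "ssl.key.password": "test1234",
    }

mconf = mtls_client_config()
```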

Examples using kafka-console-producer and kafka-console-consumer, passing in the client-ssl.properties file with the properties defined above:

bin/kafka-console-producer --broker-list kafka1:9093 --topic test --producer.config client-ssl.properties
bin/kafka-console-consumer --bootstrap-server kafka1:9093 --topic test --consumer.config client-ssl.properties --from-beginning

Optional settings¶

Here are some optional settings:

ssl.provider The name of the security provider used for TLS/SSL connections. Default value is the default security provider of the JVM.

ssl.cipher.suites A cipher suite is a named combination of authentication, encryption, MAC (message authentication code), and key exchange algorithms used to negotiate the security settings for a network connection using the TLS/SSL network protocol.

  • Type: list
  • Default: null (by default, all supported cipher suites are enabled)
  • Importance: medium

ssl.enabled.protocols The list of protocols enabled for SSL connections. The default is ‘TLSv1.2,TLSv1.3’ when running with Java 11 or newer, and ‘TLSv1.2’ otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it. Otherwise, clients and servers fallback to TLSv1.2 (assuming both support at least TLSv1.2). This default should be fine for most cases.

ssl.truststore.type The file format of the truststore file.

ZooKeeper¶

Starting in Confluent Platform version 5.5.0, the version of ZooKeeper that is bundled with Kafka supports TLS/SSL. For details, refer to Adding security to a running cluster .

Kafka Connect¶

This section describes how to enable security for Kafka Connect. Securing Kafka Connect requires that you configure security for:

  1. Kafka Connect workers: part of the Kafka Connect API, a worker is really just an advanced client, underneath the covers
  2. Kafka Connect connectors: connectors may have embedded producers or consumers, so you must override the default configurations for Connect producers used with source connectors and Connect consumers used with sink connectors
  3. Kafka Connect REST: Kafka Connect exposes a REST API that can be configured to use SSL using additional properties

Configure security for Kafka Connect as described in the section below. Additionally, if you are using Confluent Control Center streams monitoring for Kafka Connect, configure security for its monitoring interceptors as well.

Configure the top-level settings in the Connect workers to use TLS/SSL by adding these properties in connect-distributed.properties . These top-level settings are used by the Connect worker for group coordination and to read and write to the internal topics which are used to track the cluster’s state (for example, configs and offsets).

bootstrap.servers=kafka1:9093
security.protocol=SSL
ssl.truststore.location=/var/ssl/private/kafka.client.truststore.jks
ssl.truststore.password=test1234

Connect workers manage the producers used by source connectors and the consumers used by sink connectors. So, for the connectors to leverage security, you also have to override the default producer/consumer configuration that the worker uses. Depending on whether the connector is a source or sink connector:

    For source connectors: configure the same properties, adding the producer prefix:

producer.bootstrap.servers=kafka1:9093
producer.security.protocol=SSL
producer.ssl.truststore.location=/var/ssl/private/kafka.client.truststore.jks
producer.ssl.truststore.password=test1234

    For sink connectors: configure the same properties, adding the consumer prefix:

consumer.bootstrap.servers=kafka1:9093
consumer.security.protocol=SSL
consumer.ssl.truststore.location=/var/ssl/private/kafka.client.truststore.jks
consumer.ssl.truststore.password=test1234
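The prefixing rule above is mechanical, so the two override sets can be derived from one base client configuration. The helper below is an illustrative sketch only; Kafka Connect itself simply reads the prefixed properties from connect-distributed.properties:

```python
# Derive producer./consumer. override properties for a Connect worker
# from a single base client config. Illustrative helper, not part of Connect.
def prefixed_overrides(base, prefix):
    return {f"{prefix}.{key}": value for key, value in base.items()}

base = {
    "bootstrap.servers": "kafka1:9093",
    "security.protocol": "SSL",
    "ssl.truststore.location": "/var/ssl/private/kafka.client.truststore.jks",
    "ssl.truststore.password": "test1234",
}

source_overrides = prefixed_overrides(base, "producer")  # for source connectors
sink_overrides = prefixed_overrides(base, "consumer")    # for sink connectors
```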
