Proper study guides for the Confluent Certified Developer for Apache Kafka (CCDAK) certification examination begin with Confluent CCDAK preparation products, designed to deliver printable CCDAK questions and help you pass the CCDAK test on your first attempt. Try the free CCDAK demo right now.
We also have free CCDAK dump questions for you:
NEW QUESTION 1
If a topic has a replication factor of 3...
- A. 3 replicas of the same data will live on 1 broker
- B. Each partition will live on 4 different brokers
- C. Each partition will live on 2 different brokers
- D. Each partition will live on 3 different brokers
Answer: D
Explanation:
Replicas are spread across the available brokers, and each replica lives on a different broker. A replication factor of 3 therefore means each partition is stored on 3 different brokers.
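For instance, such a topic can be created with the standard CLI tool (the topic name, partition count, and broker address below are just placeholders):

```shell
# Create a topic whose 6 partitions each have 3 replicas,
# placed on 3 different brokers
kafka-topics --bootstrap-server localhost:9092 \
  --create --topic my-topic \
  --partitions 6 --replication-factor 3
```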
NEW QUESTION 2
To enhance compression, I can increase the chances of batching by using
- A. acks=all
- B. linger.ms=20
- C. batch.size=65536
- D. max.message.size=10MB
Answer: B
Explanation:
linger.ms forces the producer to wait before sending messages, hence increasing the chance of creating batches that can be heavily compressed.
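A producer configuration along these lines illustrates the idea (the values are illustrative, not prescriptive):

```properties
# producer.properties - settings that increase batching
compression.type=snappy   # compress each batch before sending
linger.ms=20              # wait up to 20 ms so batches can fill up
batch.size=65536          # allow batches of up to 64 KB
```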
NEW QUESTION 3
What is the disadvantage of request/response communication?
- A. Scalability
- B. Reliability
- C. Coupling
- D. Cost
Answer: C
Explanation:
Point-to-point (request-response) style will couple client to the server.
NEW QUESTION 4
To continuously export data from Kafka into a target database, I should use
- A. Kafka Producer
- B. Kafka Streams
- C. Kafka Connect Sink
- D. Kafka Connect Source
Answer: C
Explanation:
Kafka Connect Sink is used to export data from Kafka to external databases and Kafka Connect Source is used to import from external databases into Kafka.
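As an illustration, a JDBC sink connector that continuously exports a topic into a database table might be configured as follows; the connector class and property names follow the Confluent JDBC connector, but the connector name, topic, and connection details are placeholders:

```json
{
  "name": "jdbc-sink-orders",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "orders",
    "connection.url": "jdbc:postgresql://db-host:5432/mydb",
    "connection.user": "kafka",
    "connection.password": "secret",
    "auto.create": "true",
    "insert.mode": "upsert",
    "pk.mode": "record_key"
  }
}
```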
NEW QUESTION 5
Your streams application is reading from an input topic that has 5 partitions. You run 5 instances of your application, each with num.stream.threads set to 5. How many stream tasks will be created, and how many will be active?
- A. 5 created, 1 active
- B. 5 created, 5 active
- C. 25 created, 25 active
- D. 25 created, 5 active
Answer: D
Explanation:
Each thread is assigned a task, so 25 tasks are created (5 instances × 5 threads), but since each partition can only be processed by one task, only 5 will be active.
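The scenario above corresponds to each of the 5 instances being started with a configuration along these lines (the application name and broker address are placeholders):

```properties
# Same application.id on every instance, so all 25 threads
# join the same consumer group for the 5 input partitions
application.id=my-streams-app
bootstrap.servers=localhost:9092
num.stream.threads=5
```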
NEW QUESTION 6
In Kafka, every broker... (select three)
- A. contains all the topics and all the partitions
- B. knows all the metadata for all topics and partitions
- C. is a controller
- D. knows the metadata for the topics and partitions it has on its disk
- E. is a bootstrap broker
- F. contains only a subset of the topics and the partitions
Answer: BEF
Explanation:
Kafka topics are divided into partitions and spread across brokers, so each broker holds only a subset of them. Every broker knows all the metadata for all topics and partitions, and every broker can act as a bootstrap broker, but only one of them is elected controller.
NEW QUESTION 7
How will you read all the messages from a topic in your KSQL query?
- A. KSQL reads from the beginning of a topic, by default.
- B. KSQL reads from the end of a topic. This cannot be changed.
- C. Use the KSQL CLI to set the auto.offset.reset property to earliest.
Answer: C
Explanation:
Consumers can set the auto.offset.reset property to earliest to start consuming from the beginning of a topic. For KSQL, run: SET 'auto.offset.reset'='earliest';
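As a concrete example in the KSQL CLI (the stream name below is just a placeholder):

```sql
-- Make subsequent queries read the underlying topic from the beginning
SET 'auto.offset.reset'='earliest';
SELECT * FROM my_stream;
```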
NEW QUESTION 8
How often is log compaction evaluated?
- A. Every time a new partition is created
- B. Every time a segment is closed
- C. Every time a message is sent to Kafka
- D. Every time a message is flushed to disk
Answer: B
Explanation:
Log compaction is evaluated every time a segment is closed. It will be triggered if enough data is "dirty" (see dirty ratio config)
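The relevant topic-level settings look like this (the values are illustrative; 0.5 is the broker default for the dirty ratio):

```properties
# Topic-level settings that influence when compaction runs
cleanup.policy=compact
min.cleanable.dirty.ratio=0.5   # compact once >= 50% of the log is "dirty"
segment.ms=604800000            # roll (close) a segment after 7 days at the latest
```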
NEW QUESTION 9
What should the heap size of a broker be in a production setup on a machine with 256 GB of RAM, running in PLAINTEXT mode?
- A. 4 GB
- B. 128 GB
- C. 16 GB
- D. 512 MB
Answer: A
Explanation:
In Kafka, a small heap size is needed, while the rest of the RAM goes automatically to the page cache (managed by the OS). The heap size goes slightly up if you need to enable SSL
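In practice this is set through the KAFKA_HEAP_OPTS environment variable before starting the broker (the script path depends on your installation):

```shell
# Give the broker a modest 4 GB heap; the remaining RAM is left
# to the OS page cache
export KAFKA_HEAP_OPTS="-Xms4g -Xmx4g"
bin/kafka-server-start.sh config/server.properties
```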
NEW QUESTION 10
I am producing Avro data on my Kafka cluster that is integrated with the Confluent Schema Registry. After a schema change that is incompatible, I know my data will be rejected. Which component will reject the data?
- A. The Confluent Schema Registry
- B. The Kafka Broker
- C. The Kafka Producer itself
- D. Zookeeper
Answer: A
Explanation:
The Confluent Schema Registry is your safeguard against incompatible schema changes and is the component that ensures no breaking schema evolution is possible. Kafka brokers do not inspect your payload or its schema, and therefore will not reject the data.
NEW QUESTION 11
What is the protocol used by Kafka clients to securely connect to the Confluent REST Proxy?
- A. Kerberos
- B. SASL
- C. HTTPS (SSL/TLS)
- D. HTTP
Answer: C
Explanation:
The REST Proxy is reached over HTTPS, which uses TLS, although it is still commonly called SSL.
NEW QUESTION 12
You are using a JDBC source connector to copy data from 3 tables to three Kafka topics. There is one connector created with tasks.max equal to 2, deployed on a cluster of 3 workers. How many tasks are launched?
- A. 2
- B. 1
- C. 3
- D. 6
Answer: A
Explanation:
Here we have three tables, but the maximum number of tasks is set to 2, so that is the maximum number of tasks that will be created.
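A configuration matching this scenario might look like the following; the property names follow the Confluent JDBC source connector, while the connector name, table names, and connection details are placeholders:

```json
{
  "name": "jdbc-source-three-tables",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db-host:5432/mydb",
    "table.whitelist": "customers,orders,products",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "topic.prefix": "db-",
    "tasks.max": "2"
  }
}
```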
NEW QUESTION 13
To transform data from a Kafka topic to another one, I should use
- A. Kafka Connect Sink
- B. Kafka Connect Source
- C. Consumer + Producer
- D. Kafka Streams
Answer: D
Explanation:
Kafka Streams is a library for building streaming applications, specifically applications that transform input Kafka topics into output Kafka topics
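A minimal sketch of such a topic-to-topic transformation with the Streams DSL (requires the kafka-streams dependency; the application id, topic names, and the uppercase mapping are illustrative):

```java
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("application.id", "uppercase-app");      // placeholder
        props.put("bootstrap.servers", "localhost:9092");  // placeholder
        props.put("default.key.serde",
                  "org.apache.kafka.common.serialization.Serdes$StringSerde");
        props.put("default.value.serde",
                  "org.apache.kafka.common.serialization.Serdes$StringSerde");

        StreamsBuilder builder = new StreamsBuilder();
        // Read from the input topic, transform each value, write to the output topic
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(v -> v.toUpperCase()).to("output-topic");

        new KafkaStreams(builder.build(), props).start();
    }
}
```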
NEW QUESTION 14
What information isn't stored inside of Zookeeper? (select two)
- A. Schema Registry schemas
- B. Consumer offset
- C. ACL information
- D. Controller registration
- E. Broker registration info
Answer: AB
Explanation:
Consumer offsets are stored in the Kafka topic __consumer_offsets, and the Schema Registry stores its schemas in the _schemas topic.
NEW QUESTION 15
Which of the following Kafka Streams operators are stateful? (select all that apply)
- A. flatmap
- B. reduce
- C. joining
- D. count
- E. peek
- F. aggregate
Answer: BCDF
Explanation:
See https://kafka.apache.org/20/documentation/streams/developer-guide/dsl-api.html#stateful-transformations
NEW QUESTION 16
When is the onCompletion() method called?
private class ProducerCallback implements Callback {
    @Override
    public void onCompletion(RecordMetadata recordMetadata, Exception e) {
        if (e != null) {
            e.printStackTrace();
        }
    }
}

ProducerRecord<String, String> record = new ProducerRecord<>("topic1", "key1", "value1");
producer.send(record, new ProducerCallback());
- A. When the message is partitioned and batched successfully
- B. When the message is serialized successfully
- C. When the broker response is received
- D. When the send() method is called
Answer: C
Explanation:
The callback is invoked when a broker response is received.
NEW QUESTION 17
What data format isn't natively available with the Confluent REST Proxy?
- A. avro
- B. binary
- C. protobuf
- D. json
Answer: C
Explanation:
Protocol Buffers is not a natively supported format for the Confluent REST Proxy, but you may use the binary format instead.
NEW QUESTION 18
100% Valid and Newest Version CCDAK Questions & Answers shared by Surepassexam, Get Full Dumps HERE: https://www.surepassexam.com/CCDAK-exam-dumps.html (New 150 Q&As)