Confluent Certified Developer for Apache Kafka (CCDAK) Exam Real CCDAK Dumps Questions [2022]

Becoming a Confluent Certified Developer for Apache Kafka (CCDAK) requires passing the CCDAK exam, but how? DumpsBase has written and verified the CCDAK dumps to provide you with real exam questions and answers. DumpsBase's real CCDAK dumps questions are available in two formats:

  1. CCDAK dumps in a PDF file, which can be downloaded instantly and is easy to read and use.
  2. CCDAK dumps in DumpsBase's testing software, which simulates the real exam mode. The software is shared for free via email.

At DumpsBase, you can pass your Confluent Certified Developer for Apache Kafka (CCDAK) certification exam with real CCDAK dumps questions on the first attempt.

CCDAK demo questions are available online so you can check the quality:

1. Where are the dynamic configurations for a topic stored?

2. What happens when the broker.rack configuration is provided in the broker configuration in a Kafka cluster?

3. What is the disadvantage of request/response communication?

4. Is KSQL ANSI SQL compliant?

5. When using plain JSON data with Connect, you see the following error message: org.apache.kafka.connect.errors.DataException: JsonDeserializer with schemas.enable requires "schema" and "payload" fields and may not contain additional fields.

How will you fix the error?
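
A hedged hint at where the fix usually lives: for schemaless JSON, schema expectations are disabled on the JsonConverter in the worker or connector configuration (standard Kafka Connect property names; the key.converter pair can be set the same way):

    value.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter.schemas.enable=false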

6. There are 3 brokers in the cluster. You want to create a topic with a single partition that is resilient to one broker failure and one broker maintenance.

What replication factor will you specify while creating the topic?

7. Two consumers share the same group.id (consumer group id). Each consumer will

8. A consumer starts and has auto.offset.reset=none, and the topic partition currently has data for offsets going from 45 to 2311. The consumer group has committed the offset 10 for the topic before.

Where will the consumer read from?
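
For context, a minimal consumer-configuration sketch (the bootstrap server and group id are illustrative) showing where auto.offset.reset is set; with none, the consumer throws an exception whenever no valid committed offset exists:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.common.serialization.StringDeserializer;

    class OffsetResetConfig {
        static Properties props() {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // illustrative
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // "earliest", "latest", or "none"; "none" fails if there is no valid committed offset
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");
            return props;
        }
    }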

9. A Kafka topic has a replication factor of 3 and a min.insync.replicas setting of 2.

How many brokers can go down before a producer with acks=all can't produce?

10. Where are KSQL-related data and metadata stored?

11. You want to sink data from a Kafka topic to S3 using Kafka Connect. There are 10 brokers in the cluster, and the topic has 2 partitions with a replication factor of 3.

How many tasks will you configure for the S3 connector?

12. To enhance compression, I can increase the chances of batching by using
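
As a hedged illustration of the batching knobs (values are arbitrary): linger.ms makes the producer wait briefly so more records accumulate per batch, and batch.size caps the batch in bytes:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    class BatchingProducerConfig {
        static Properties props() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
            props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy"); // compresses whole batches
            props.put(ProducerConfig.LINGER_MS_CONFIG, 20);          // wait up to 20 ms to fill a batch
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);  // max batch size in bytes
            return props;
        }
    }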

13. How can you make a Kafka consumer immediately stop polling data from Kafka and gracefully shut down the consumer application?
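
One widely used pattern (a sketch, assuming an existing KafkaConsumer): call wakeup() from another thread, which makes poll() throw a WakeupException, then close the consumer in a finally block:

    import java.time.Duration;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.errors.WakeupException;

    class GracefulShutdown {
        static void run(KafkaConsumer<String, String> consumer) {
            // wakeup() is the one consumer method that is safe to call from another thread
            Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));
            try {
                while (true) {
                    consumer.poll(Duration.ofMillis(100)); // throws WakeupException after wakeup()
                }
            } catch (WakeupException e) {
                // expected on shutdown; fall through to close
            } finally {
                consumer.close(); // leaves the group cleanly and triggers an immediate rebalance
            }
        }
    }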

14. Consider the following Kafka Streams code:

    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, String> textLines = builder.stream("word-count-input");
    KTable<String, Long> wordCounts = textLines
        .mapValues(textLine -> textLine.toLowerCase())
        .flatMapValues(textLine -> Arrays.asList(textLine.split("\\W+")))
        .selectKey((key, word) -> word)
        .groupByKey()
        .count(Materialized.as("Counts"));
    wordCounts.toStream().to("word-count-output", Produced.with(Serdes.String(), Serdes.Long()));
    builder.build();

What is an adequate topic configuration for the topic word-count-output?

15. Where are the ACLs stored in a Kafka cluster by default?

16. What kind of delivery guarantee does this consumer offer?

    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        try {
            consumer.commitSync();
        } catch (CommitFailedException e) {
            log.error("commit failed", e);
        }
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("topic = %s, partition = %s, offset = %d, customer = %s, country = %s\n",
                record.topic(), record.partition(), record.offset(), record.key(), record.value());
        }
    }

17. The exactly-once guarantee in Kafka Streams applies to which flow of data?

18. You are using a JDBC source connector to copy data from a table to a Kafka topic. There is one connector created with max.tasks equal to 2, deployed on a cluster of 3 workers.

How many tasks are launched?

19. You want to perform table lookups against a KTable every time a new record is received from the KStream.

What is the output of KStream-KTable join?
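
For context, a minimal join sketch (topic names and the value joiner are illustrative): in a KStream-KTable join, the stream side drives the processing, so only new stream records trigger output:

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;

    class StreamTableJoin {
        static void build(StreamsBuilder builder) {
            KStream<String, String> events = builder.stream("events"); // illustrative topics
            KTable<String, String> users = builder.table("users");
            // One result per incoming stream record that matches a table entry;
            // updates to the table alone do not produce output.
            KStream<String, String> enriched =
                events.join(users, (event, user) -> event + " by " + user);
            enriched.to("enriched-events");
        }
    }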

20. You are doing complex calculations using a machine learning framework on records fetched from a Kafka topic. It takes about 6 minutes to process a record batch, and the consumer enters a rebalance even though it is still running.

How can you improve this scenario?
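
As a hint at the knobs involved (values are illustrative): max.poll.interval.ms bounds the time allowed between poll() calls, and max.poll.records shrinks each batch so processing finishes sooner:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    class SlowProcessingConfig {
        static Properties props() {
            Properties props = new Properties();
            // Allow up to 10 minutes between poll() calls before the consumer is evicted.
            props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 10 * 60 * 1000);
            // Fetch fewer records per poll so each batch finishes inside that window.
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);
            return props;
        }
    }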

21. Which actions will trigger a partition rebalance for a consumer group? (select three)

22. Which of the following settings increases the chance of batching for a Kafka producer?

23. What data format isn't natively available with the Confluent REST Proxy?

24. You are using a JDBC source connector to copy data from two tables to two Kafka topics. There is one connector created with max.tasks equal to 2, deployed on a cluster of 3 workers.

How many tasks are launched?

25. To import data from external databases, I should use

26. What is a generic unique id that I can use for messages I receive from a consumer?

27. What happens if you write the following code in your producer?

    producer.send(producerRecord).get()
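
For context, send() returns a Future<RecordMetadata>, so calling .get() blocks until the broker responds, trading throughput for a synchronous, per-record acknowledgment. A sketch with the checked exceptions handled (topic and record contents are illustrative):

    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    class SyncSend {
        static void send(KafkaProducer<String, String> producer) {
            ProducerRecord<String, String> producerRecord =
                    new ProducerRecord<>("demo-topic", "key", "value"); // illustrative
            try {
                // Blocks until the send is acknowledged or fails.
                RecordMetadata metadata = producer.send(producerRecord).get();
            } catch (InterruptedException | ExecutionException e) {
                // the send failed or the wait was interrupted
            }
        }
    }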

28. Compaction is enabled for a topic in Kafka by setting log.cleanup.policy=compact.

What is true about log compaction?

29. Which of these joins does not require the input topics to share the same number of partitions?

30. How often is log compaction evaluated?

31. A Zookeeper ensemble contains 3 servers.

Over which ports should the members of the ensemble be able to communicate in the default configuration? (select three)

32. A client connects to a broker in the cluster and sends a fetch request for a partition in a topic. It gets a NotLeaderForPartitionException in the response.

How does the client handle this situation?

33. If I supply the setting compression.type=snappy to my producer, what will happen? (select two)

34. If I produce to a topic that does not exist, and the broker setting auto.create.topics.enable=true, what will happen?

35. How will you read all the messages from a topic in your KSQL query?

36. The kafka-console-consumer CLI, when used with the default options

37. A producer is sending messages with null key to a topic with 6 partitions using the DefaultPartitioner. Where will the messages be stored?

38. Which of the following Kafka Streams operators are stateless? (select all that apply)

39. Suppose you have 6 brokers and you decide to create a topic with 10 partitions and a replication factor of 3. The brokers 0 and 1 are on rack A, the brokers 2 and 3 are on rack B, and the brokers 4 and 5 are on rack C. If the leader for partition 0 is on broker 4, and the first replica is on broker 2, which broker can host the last replica? (select two)

40. Your topic is log compacted and you are sending a message with the key K and value null.

What will happen?

41. A Kafka topic has a replication factor of 3 and a min.insync.replicas setting of 1.

What is the maximum number of brokers that can be down so that a producer with acks=all can still produce to the topic?

42. You have a Kafka cluster and all the topics have a replication factor of 3. An intern at your company stopped a broker and accidentally deleted all of that broker's data on disk.

What will happen if the broker is restarted?

43. A consumer application is using KafkaAvroDeserializer to deserialize Avro messages.

What happens if the message schema is not present in the KafkaAvroDeserializer's local cache?

44. In the Kafka consumer metrics, you observe that the fetch-rate is very high and each fetch is small.

What steps will you take to increase throughput?

45. In Avro, removing or adding a field that has a default is a __ schema evolution

46. You have a consumer group of 12 consumers. When a consumer gets killed abruptly by the process management system, it does not shut down gracefully, and it takes up to 10 seconds for a rebalance to happen. The business would like a 3-second rebalance time.

What should you do? (select two)
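
The knobs in play (values illustrative, and within the bounds the broker allows via group.min.session.timeout.ms): session.timeout.ms bounds how long a dead consumer goes undetected, and heartbeat.interval.ms is usually set to about a third of it:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    class FastFailureDetection {
        static Properties props() {
            Properties props = new Properties();
            // Detect an abruptly killed consumer within ~3 seconds.
            props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 3000);
            // Heartbeats must arrive well inside the session timeout.
            props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 1000);
            return props;
        }
    }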

47. A consumer starts and has auto.offset.reset=latest, and the topic partition currently has data for offsets going from 45 to 2311. The consumer group has committed the offset 643 for the topic before.

Where will the consumer read from?

48. You are receiving orders from different customers in an "orders" topic with multiple partitions. Each message has the customer name as the key. There is a special customer named ABC that generates a lot of orders, and you would like to reserve a partition exclusively for ABC. The rest of the messages should be distributed among the other partitions.

How can this be achieved?
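
A sketch of one approach, a custom Partitioner (the class name is illustrative) that pins the key "ABC" to partition 0 and spreads other keys over the rest; it would be registered on the producer via partitioner.class:

    import java.util.Map;
    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;
    import org.apache.kafka.common.utils.Utils;

    public class AbcPartitioner implements Partitioner {
        @Override
        public int partition(String topic, Object key, byte[] keyBytes,
                             Object value, byte[] valueBytes, Cluster cluster) {
            int numPartitions = cluster.partitionCountForTopic(topic);
            if ("ABC".equals(key)) {
                return 0; // partition reserved for the high-volume customer
            }
            // Hash all other keys over partitions 1..n-1.
            return 1 + (Utils.murmur2(keyBytes) & 0x7fffffff) % (numPartitions - 1);
        }

        @Override public void close() {}
        @Override public void configure(Map<String, ?> configs) {}
    }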

49. A Zookeeper ensemble contains 5 servers.

What is the maximum number of servers that can go missing while the ensemble still runs?

50. If I want to send binary data through the REST Proxy, it needs to be base64 encoded.

Which component needs to encode the binary data into base64?

51. Which of the following is not an Avro primitive type?

52. An ecommerce website maintains two topics: a high-volume "purchase" topic with 5 partitions and a low-volume "customer" topic with 3 partitions. You would like to do a stream-table join of these topics.

How should you proceed?

53. A topic "sales" is being produced to in the Americas region. You are mirroring this topic to the European region using MirrorMaker. From there, you are only reading the topic for analytics purposes.

What kind of mirroring is this?

54. In Kafka, every broker... (select three)

55. What is true about partitions? (select two)

56. A consumer has auto.offset.reset=latest, and the topic partition currently has data for offsets going from 45 to 2311. The consumer group never committed offsets for the topic before.

Where will the consumer read from?

57. You are running a Kafka Streams application in a Docker container managed by Kubernetes, and upon application restart, it takes a long time for the container to replicate the state and get back to processing data.

How can you dramatically improve the application restart time?
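
Two common levers, sketched below (the state directory path is an illustrative mount): keep state.dir on a persistent volume so local state survives restarts, and run standby replicas so another instance already holds a warm copy of the state stores:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    class FastRestartConfig {
        static Properties props() {
            Properties props = new Properties();
            // Put the state stores on a persistent volume so they survive container restarts.
            props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams"); // illustrative path
            // Keep one warm standby copy of each state store on another instance.
            props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
            return props;
        }
    }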

58. What isn't a feature of the Confluent Schema Registry?

59. A producer just sent a message to the leader broker for a topic partition. The producer used acks=1 and therefore the data has not yet been replicated to followers.

Under which conditions will the consumer see the message?

60. To continuously export data from Kafka into a target database, I should use


 
