Title of test: kafka_apache

Description: Learn Apache Kafka.

Author: adamantine

Creation Date: 03/01/2025

Category: Science

Number of questions: 50
Content:
Where are the dynamic configurations for a topic stored?
- In Zookeeper
- In an internal Kafka topic __topic_configurations
- In server.properties
- On the Kafka broker file system
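For reference, a sketch of how a dynamic topic configuration is typically altered at runtime with the Java AdminClient rather than by editing server.properties; the bootstrap address, topic name, and retention value are assumed placeholders:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DynamicTopicConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        try (AdminClient admin = AdminClient.create(props)) {
            // "my-topic" and the retention value are illustrative examples
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            AlterConfigOp setRetention = new AlterConfigOp(
                    new ConfigEntry("retention.ms", "86400000"), AlterConfigOp.OpType.SET);
            // The altered config is persisted cluster-side, not in server.properties
            admin.incrementalAlterConfigs(Map.of(topic, List.of(setRetention))).all().get();
        }
    }
}
```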
What happens when the broker.rack configuration is provided in the broker configuration in a Kafka cluster?
- You can use the same broker.id as long as they have different broker.rack configurations
- Replicas for a partition are placed in the same rack
- Replicas for a partition are spread across different racks
- Each rack contains all the topics and partitions, effectively making Kafka highly available

What is the disadvantage of request/response communication?
- Scalability
- Reliability
- Coupling
- Cost

Is KSQL ANSI SQL compliant?
- Yes
- No

When using plain JSON data with Connect, you see the following error message: org.apache.kafka.connect.errors.DataException: JsonDeserializer with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. How will you fix the error?
- Set key.converter and value.converter to JsonConverter and set the schema registry URL
- Use Single Message Transforms to add schema and payload fields in the message
- Set key.converter.schemas.enable and value.converter.schemas.enable to false
- Set key.converter and value.converter to AvroConverter and set the schema registry URL
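For reference, a sketch of how the converter settings named in the options appear in a Connect worker configuration when the JSON messages carry no schema envelope (the converter class names are the ones shipped with Kafka Connect):

```properties
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Disable the schema/payload envelope for plain, schemaless JSON
key.converter.schemas.enable=false
value.converter.schemas.enable=false
```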
There are 3 brokers in the cluster. You want to create a topic with a single partition that is resilient to one broker failure and one broker maintenance. What replication factor will you specify while creating the topic?
- 6
- 3
- 2
- 1

Two consumers share the same group.id (consumer group id). Each consumer will:
- Read mutually exclusive offset blocks on all the partitions
- Read all the data on mutually exclusive partitions
- Read all data from all partitions

A consumer starts and has auto.offset.reset=none, and the topic partition currently has data for offsets going from 45 to 2311. The consumer group has committed the offset 10 for the topic before. Where will the consumer read from?
- offset 45
- offset 10
- it will crash
- offset 2311
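As a reminder of where this setting lives, a minimal consumer configuration sketch (the bootstrap address and group id are assumed placeholders). auto.offset.reset is only consulted when the group has no valid committed offset in range; with none, the consumer throws instead of resetting:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // assumed group id
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
// "none": throw if no valid committed offset exists; "earliest"/"latest": reset instead
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
```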
A Kafka topic has a replication factor of 3 and a min.insync.replicas setting of 2. How many brokers can go down before a producer with acks=all can't produce?
- 0
- 2
- 1
- 3

Where are KSQL-related data and metadata stored?
- Kafka topics
- Zookeeper
- PostgreSQL database
- Schema Registry

You want to sink data from a Kafka topic to S3 using Kafka Connect. There are 10 brokers in the cluster, and the topic has 2 partitions with a replication factor of 3. How many tasks will you configure for the S3 connector?
- 10
- 6
- 3
- 2

To enhance compression, I can increase the chances of batching by using:
- acks=all
- linger.ms=20
- batch.size=65536
- max.message.size=10MB

How can you make a Kafka consumer immediately stop polling data and shut down gracefully?
- Call consumer.wakeup() and catch a WakeupException
- Call consumer.poll() in another thread
- Kill the consumer thread
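The wakeup-based shutdown pattern, sketched below assuming consumer is an already-configured KafkaConsumer and "orders" is a placeholder topic. wakeup() is the one consumer method that is safe to call from another thread, and it makes a blocked poll() throw WakeupException so the poll loop can exit and close cleanly:

```java
import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.errors.WakeupException;

final Thread mainThread = Thread.currentThread();
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    consumer.wakeup();   // makes the blocked poll() below throw WakeupException
    try { mainThread.join(); } catch (InterruptedException ignored) { }
}));
try {
    consumer.subscribe(Collections.singletonList("orders")); // assumed topic name
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        records.forEach(r -> System.out.println(r.value()));
    }
} catch (WakeupException e) {
    // expected during shutdown; fall through to close()
} finally {
    consumer.close();    // leaves the consumer group cleanly
}
```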
```java
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> textLines = builder.stream("word-count-input");
KTable<String, Long> wordCounts = textLines
        .mapValues(textLine -> textLine.toLowerCase())
        .flatMapValues(textLine -> Arrays.asList(textLine.split("\\W+")))
        .selectKey((key, word) -> word)
        .groupByKey()
        .count(Materialized.as("Counts"));
wordCounts.toStream().to("word-count-output", Produced.with(Serdes.String(), Serdes.Long()));
builder.build();
```

What is an adequate topic configuration for the topic word-count-output?
- max.message.bytes=10000000
- cleanup.policy=delete
- compression.type=lz4
- cleanup.policy=compact
Where are the ACLs stored in a Kafka cluster by default?
- Inside the broker's data directory
- Under the Zookeeper node /kafka-acl/
- In the Kafka topic __kafka_acls
- Inside Zookeeper's data directory

What kind of delivery guarantee does this consumer offer?

```java
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    try {
        consumer.commitSync();
    } catch (CommitFailedException e) {
        log.error("commit failed", e);
    }
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("topic = %s, partition = %s, offset = %d, customer = %s, country = %s%n",
                record.topic(), record.partition(), record.offset(), record.key(), record.value());
    }
}
```

- Exactly-once
- At-least-once
- At-most-once
The exactly-once guarantee in Kafka Streams is for which flow of data?
- Kafka => Kafka
- Kafka => External
- External => Kafka

You are using a JDBC source connector to copy data from a table to a Kafka topic. There is one connector created with max.tasks equal to 2, deployed on a cluster of 3 workers. How many tasks are launched?
- 3
- 2
- 1
- 6

You want to perform table lookups against a KTable every time a new record is received from the KStream. What is the output of a KStream-KTable join?
- KTable
- GlobalKTable
- You choose between KStream or KTable
- KStream

You are doing complex calculations using a machine learning framework on records fetched from a Kafka topic. It takes about 6 minutes to process a record batch, and the consumer enters rebalances even though it's still running. How can you improve this scenario?
- Increase max.poll.interval.ms to 600000
- Increase heartbeat.interval.ms to 600000
- Increase session.timeout.ms to 600000
- Add consumers to the consumer group and kill them right away
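A sketch of the relevant consumer settings for long per-batch processing (the value mirrors the 10-minute figure in the options; max.poll.records is an extra illustrative knob). Heartbeats are sent from a background thread, so slow processing only risks exceeding max.poll.interval.ms, not session.timeout.ms:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

Properties props = new Properties();
// Allow up to 10 minutes between poll() calls before the consumer is evicted
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000");
// Optionally fetch fewer records per poll so each batch finishes sooner (assumed value)
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "50");
```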
Which actions will trigger a partition rebalance for a consumer group? (select three)
- Increase partitions of a topic
- Remove a broker from the cluster
- Add a new consumer to the consumer group
- A consumer in a consumer group shuts down

Which of the following settings increases the chance of batching for a Kafka producer?
- Increase batch.size
- Increase message.max.bytes
- Increase the number of producer threads
- Increase linger.ms
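A sketch of how the two batching knobs appear in a producer configuration (the bootstrap address and serializers are assumed boilerplate):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
// linger.ms: wait a little before sending so more records can join a batch
props.put(ProducerConfig.LINGER_MS_CONFIG, "20");
// batch.size: byte budget of a single per-partition batch
props.put(ProducerConfig.BATCH_SIZE_CONFIG, "65536");
KafkaProducer<String, String> producer = new KafkaProducer<>(props);
```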
What data format isn't natively available with the Confluent REST Proxy?
- avro
- binary
- protobuf
- json

You are using a JDBC source connector to copy data from 2 tables to two Kafka topics. There is one connector created with max.tasks equal to 2, deployed on a cluster of 3 workers. How many tasks are launched?
- 6
- 1
- 2
- 3

To import data from external databases, I should use:
- Confluent REST Proxy
- Kafka Connect Sink
- Kafka Streams
- Kafka Connect Source

What is a generic unique id that I can use for messages I receive from a consumer?
- topic + partition + timestamp
- topic + partition + offset
- topic + timestamp

What happens if you write the following code in your producer? producer.send(producerRecord).get()
- Compression will be increased
- Throughput will be decreased
- It will force all brokers in Kafka to acknowledge the producerRecord
- Batching will be increased
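The difference the question is probing, side by side (a sketch; producer and producerRecord are assumed to already exist):

```java
// Asynchronous send: returns a Future immediately, so records accumulate into batches
producer.send(producerRecord);

// Synchronous send: get() blocks until the broker acknowledges this single record,
// costing one round trip per record and reducing throughput (and batching)
producer.send(producerRecord).get();
```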
Compaction is enabled for a topic in Kafka by setting log.cleanup.policy=compact. What is true about log compaction?
- After cleanup, only one message per key is retained, with the first value
- Each message stored in the topic is compressed
- Kafka automatically de-duplicates incoming messages based on key hashes
- After cleanup, only one message per key is retained, with the latest value
- Compaction changes the offset of messages

Which of these joins does not require the input topics to share the same number of partitions?
- KStream-KTable join
- KStream-KStream join
- KStream-GlobalKTable join
- KTable-KTable join

How often is log compaction evaluated?
- Every time a new partition is created
- Every time a segment is closed
- Every time a message is sent to Kafka
- Every time a message is flushed to disk

A Zookeeper ensemble contains 3 servers. Over which ports should the members of the ensemble be able to communicate, in the default configuration? (select three)
- 2181
- 3888
- 443
- 2888
- 9092
- 80

A client connects to a broker in the cluster and sends a fetch request for a partition in a topic. It gets a NotLeaderForPartitionException in the response. How does the client handle this situation?
- Get the broker id hosting the leader replica from Zookeeper and send the request to it
- Send a metadata request to the same broker for the topic and select the broker hosting the leader replica
- Send a metadata request to Zookeeper for the topic and select the broker hosting the leader replica
- Send the fetch request to each broker in the cluster

If I supply the setting compression.type=snappy to my producer, what will happen? (select two)
- The Kafka brokers have to de-compress the data
- The Kafka brokers have to compress the data
- The consumers have to de-compress the data
- The consumers have to compress the data
- The producers have to compress the data

If I produce to a topic that does not exist, and the broker setting auto.create.topics.enable=true, what will happen?
- Kafka will automatically create the topic with 1 partition and a replication factor of 1
- Kafka will automatically create the topic with the indicated producer settings num.partitions and default.replication.factor
- Kafka will automatically create the topic with the broker settings num.partitions and default.replication.factor
- Kafka will automatically create the topic with num.partitions = # of brokers and replication.factor = 3

How will you read all the messages from a topic in your KSQL query?
- KSQL reads from the beginning of a topic, by default.
- KSQL reads from the end of a topic. This cannot be changed.
- Use the KSQL CLI to set the auto.offset.reset property to earliest.
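For reference, a session property is set in the KSQL CLI with a SET statement, along these lines:

```sql
-- Make subsequent queries read their input topics from the beginning
SET 'auto.offset.reset' = 'earliest';
```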
The kafka-console-consumer CLI, when used with the default options:
- uses a random group id
- always uses the same group id
- does not use a group id

A producer is sending messages with a null key to a topic with 6 partitions using the DefaultPartitioner. Where will the messages be stored?
- Partition 5
- Any of the topic partitions
- The partition for the null key
- Partition 0

Which of the following Kafka Streams operators are stateless? (select all that apply)
- map
- filter
- flatMap
- branch
- groupBy
- aggregate

Suppose you have 6 brokers and you decide to create a topic with 10 partitions and a replication factor of 3. Brokers 0 and 1 are on rack A, brokers 2 and 3 are on rack B, and brokers 4 and 5 are on rack C. If the leader for partition 0 is on broker 4, and the first replica is on broker 2, which broker can host the last replica? (select two)
- 6
- 1
- 2
- 5
- 0
- 3

Your topic is log compacted and you are sending a message with the key K and a null value. What will happen?
- The broker will delete all messages with the key K upon cleanup
- The producer will throw a Runtime exception
- The broker will delete only the message with the key K and the null value upon cleanup
- The message will get ignored by the Kafka broker

A Kafka topic has a replication factor of 3 and a min.insync.replicas setting of 1. What is the maximum number of brokers that can be down so that a producer with acks=all can still produce to the topic?
- 3
- 0
- 2
- 1

You have a Kafka cluster and all the topics have a replication factor of 3. An intern at your company stopped a broker and accidentally deleted all of that broker's data on disk. What will happen if the broker is restarted?
- The broker will start, and other topics will also be deleted as the broker data on the disk got deleted
- The broker will start, and won't be online until all the data it needs to have is replicated from other leaders
- The broker will crash
- The broker will start and won't have any data. If the broker becomes leader, we have data loss

A consumer application is using KafkaAvroDeserializer to deserialize Avro messages. What happens if the message schema is not present in the AvroDeserializer local cache?
- Throws SerializationException
- Fails silently
- Throws DeserializationException
- Fetches the schema from the Schema Registry

In the Kafka consumer metrics it is observed that fetch-rate is very high and each fetch is small. What steps will you take to increase throughput?
- Increase fetch.max.wait.ms
- Increase fetch.max.bytes
- Decrease fetch.max.bytes
- Decrease fetch.min.bytes
- Increase fetch.min.bytes

In Avro, removing or adding a field that has a default is a __ schema evolution.
- full
- backward
- breaking
- forward

You have a consumer group of 12 consumers, and when a consumer gets killed abruptly by the process management system, it does not trigger a graceful shutdown of your consumer, so it takes up to 10 seconds for a rebalance to happen. The business would like a 3 second rebalance time. What should you do? (select two)
- Increase session.timeout.ms
- Decrease session.timeout.ms
- Increase heartbeat.interval.ms
- Decrease max.poll.interval.ms
- Increase max.poll.interval.ms
- Decrease heartbeat.interval.ms

A consumer starts and has auto.offset.reset=latest, and the topic partition currently has data for offsets going from 45 to 2311. The consumer group has committed the offset 643 for the topic before. Where will the consumer read from?
- it will crash
- offset 2311
- offset 643
- offset 45

You are receiving orders from different customers in an "orders" topic with multiple partitions. Each message has the customer name as the key. There is a special customer named ABC that generates a lot of orders, and you would like to reserve a partition exclusively for ABC. The rest of the messages should be distributed among the other partitions. How can this be achieved?
- Add metadata to the producer record
- Create a custom partitioner
- All messages with the same key will go to the same partition, but the same partition may have messages with different keys. It is not possible to reserve a partition
- Define a Kafka broker routing rule
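One way to realize this (a hypothetical sketch, not the only possible design): a custom Partitioner that pins the key "ABC" to the last partition and hashes every other key over the remaining ones, reusing the murmur2 hash that the DefaultPartitioner uses.

```java
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

// Hypothetical partitioner reserving the last partition for customer "ABC"
public class AbcPartitioner implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if ("ABC".equals(key)) {
            return numPartitions - 1;                           // reserved for ABC
        }
        // Hash all other keys (customer names, per the question) over the rest
        return Utils.toPositive(Utils.murmur2(keyBytes)) % (numPartitions - 1);
    }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}
```

The producer would then be pointed at it via the partitioner.class setting, using the class's fully qualified name.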
A Zookeeper ensemble contains 5 servers. What is the maximum number of servers that can go missing and the ensemble still run?
- 3
- 4
- 2
- 1

If I want to send binary data through the REST proxy, it needs to be base64 encoded. Which component needs to encode the binary data into base64?
- The Producer
- The Kafka Broker
- Zookeeper
- The REST Proxy