Interview MCQ Set 1

1. iSCSI is a mapping of:
a) SCSI over TCP/IP
b) IP over SCSI
c) FC over IP
d) None of the mentioned

Answer: a

2. iSCSI allows what type of access?
a) block level
b) file level
c) both a & b
d) none of the mentioned

Answer: a

3. iSCSI names are:
a) Globally unique
b) Local to the setup
c) Permanent
d) Temporary

Answer: a, c

4. Which of the following is not true of iSCSI names?
a) iSCSI names are associated with iSCSI nodes (targets and initiators)
b) iSCSI names are associated with network adapter cards
c) iSCSI names are worldwide unique
d) iSCSI names are permanent

Answer: b

5. Which of the following is not a valid iSCSI name?
a) iqn.2001-04.com.mystorage:storage.tape1
b) iqn.2001-04.com.mystorage
c) iqn.01-04.com.example.disk
d) none of the mentioned

Answer: c

6. Which of the following is not a valid iSCSI name?
a) eui.1234098769785341
b) eui.4237098769785341
c) eui.12340987697853422.disk
d) none of the mentioned

Answer: c
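
Both name formats tested in questions 5 and 6 can be checked mechanically: an iqn name needs a four-digit year, a two-digit month, and a reversed domain, optionally followed by a colon-separated suffix, while an eui name is exactly 16 hexadecimal digits with nothing after them. A minimal sketch of such a check (the class and regexes below are illustrative, not from any iSCSI library):

```java
import java.util.regex.Pattern;

// Illustrative validator for the two iSCSI name formats.
public class IscsiNameCheck {
    // iqn.<yyyy-mm>.<reversed-domain>[:<string>]
    private static final Pattern IQN =
        Pattern.compile("iqn\\.\\d{4}-\\d{2}\\.[a-z0-9.-]+(:\\S+)?");
    // eui. followed by exactly 16 hex digits, nothing more
    private static final Pattern EUI =
        Pattern.compile("eui\\.[0-9A-Fa-f]{16}");

    static boolean isValid(String name) {
        return IQN.matcher(name).matches() || EUI.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("iqn.2001-04.com.mystorage:storage.tape1")); // true
        System.out.println(isValid("iqn.01-04.com.example.disk"));  // false: two-digit year
        System.out.println(isValid("eui.1234098769785341"));        // true
        System.out.println(isValid("eui.12340987697853422.disk"));  // false: trailing suffix
    }
}
```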

7. Discovery session in iSCSI is used for:
a) Discovering iSCSI targets and their TargetAddresses.
b) Probing LUNs on iSCSI targets.
c) Either of the above
d) None of the mentioned

Answer: a

8. Which of the following are valid SendTargets commands?
a) SendTargets=iqn.2001-04.com.mystorage:storage.tape1
b) SendTargets=all
c) Both a) and b)
d) None of the mentioned

Answer: c

9. iSCSI targets can be discovered by:
a) SendTargets
b) Static configuration
c) Using SLP/iSNS
d) All of the mentioned

Answer: d

10. Which of the following is false?
a) iSCSI requires login from initiator to target
b) There can be multiple paths between initiator and target
c) Data integrity is ensured using digests
d) None of the mentioned

Answer: d

Interview MCQ Set 2

1. __________ provides the functionality of a messaging system.
a) Oozie
b) Kafka
c) Lucene
d) BigTop

Answer: b [Reason:] Kafka is a distributed, partitioned, replicated commit log service.

2. Point out the correct statement:
a) With Kafka, more users, whether using SQL queries or BI applications, can interact with more data
b) A topic is a category or feed name to which messages are published
c) For each topic, the Kafka cluster maintains a partitioned log
d) None of the mentioned

Answer: b [Reason:] A topic is the named category or feed to which producers publish messages and from which consumers read.

3. Kafka maintains feeds of messages in categories called:
a) topics
b) chunks
c) domains
d) messages

Answer: a [Reason:] We’ll call processes that publish messages to a Kafka topic producers.
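
A minimal producer sketch to make the terminology concrete (the broker address localhost:9092 and the topic name "events" are assumptions for illustration):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// A producer publishes messages to a named topic.
public class TopicProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Records with the same key always land in the same partition.
            producer.send(new ProducerRecord<>("events", "key-1", "hello"));
        }
    }
}
```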

4. Kafka is run as a cluster composed of one or more servers, each of which is called a:
a) cTakes
b) broker
c) test
d) none of the mentioned

Answer: b [Reason:] Each server in a Kafka cluster is called a broker; brokers store the partitioned logs for topics.

5. Point out the wrong statement:
a) The Kafka cluster does not retain all published messages
b) A single Kafka broker can handle hundreds of megabytes of reads and writes per second from thousands of clients
c) Kafka is designed to allow a single cluster to serve as the central data backbone for a large organization
d) Messages are persisted on disk and replicated within the cluster to prevent data loss

Answer: a [Reason:] The Kafka cluster retains all published messages, whether or not they have been consumed, for a configurable period of time.

6. Communication between the clients and the servers is done with a simple, high-performance, language-agnostic _________ protocol.
a) IP
b) TCP
c) SMTP
d) ICMP

Answer: b [Reason:] A Java client is provided for Kafka, but clients are available in many languages.

7. The only metadata retained on a per-consumer basis is the position of the consumer in the log, called the:
a) offset
b) partition
c) chunks
d) all of the mentioned

Answer: a [Reason:] The offset is controlled by the consumer: normally a consumer advances its offset linearly as it reads messages.
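
A short consumer sketch showing the offset (broker, group, and topic names are placeholder assumptions): each record reports the offset at which it sits in its partition, and the consumer's position advances linearly as it reads.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OffsetReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> r : records) {
                // The offset is the only per-consumer metadata Kafka retains.
                System.out.printf("partition=%d offset=%d value=%s%n",
                                  r.partition(), r.offset(), r.value());
            }
        }
    }
}
```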

8. Each Kafka partition has one server which acts as the _________
a) leader
b) followers
c) staters
d) all of the mentioned

Answer: a [Reason:] Each partition is replicated across a configurable number of servers for fault tolerance; one server acts as the leader and the rest act as followers.

9. _________ has stronger ordering guarantees than a traditional messaging system.
a) Kafka
b) Slider
c) Suz
d) None of the mentioned

Answer: a [Reason:] A traditional queue retains messages in order on the server, but order can be lost when messages are delivered to parallel consumers; Kafka preserves order within each partition.

10. Kafka only provides a _________ order over messages within a partition.
a) partial
b) total
c) 30%
d) none of the mentioned

Answer: b [Reason:] Per-partition ordering combined with the ability to partition data by key is sufficient for most applications.

Interview MCQ Set 3

1. A “Logical Volume Manager” helps in
a) Virtualizing storage
b) Providing direct access to the underlying storage
c) Managing disk space efficiently without having to know the actual hardware details
d) Both a & c
e) None of the mentioned

Answer: d

2. Physical volumes are
a) The space on physical storage that represents a logical volume
b) Disk or disk partitions used to construct logical volumes
c) A bunch of disks put together that can be made into a logical volume
d) None of the mentioned

Answer: b

3. Which of the following is true? Logical volumes
a) Can span across multiple volume groups
b) Can span across multiple physical volumes
c) Can be constructed only using a single physical disk
d) None of the mentioned

Answer: b

4. A logical extent (LE) and a physical extent (PE) are related as follows
a) PE resides on a disk, whereas LE resides on a logical volume
b) LE is larger in size than a PE
c) LEs are unique whereas PEs are not
d) Every LE maps to a one and only one PE

Answer: a, d

5. Which of the following statements are true?
a) LVM is storage independent whereas a RAID system is limited to the storage subsystem
b) LVM provides snapshot feature
c) With LVM we can grow volumes to any size
d) A RAID system can provide more storage space than an LVM

Answer: a, b, c

6. LVM is independent of device IDs because
a) LVM uses its own device naming to identify a physical disk
b) LVM stores the volume management information on the disks that helps it reconstruct volumes
c) LVM is an abstraction layer over physical devices and does not need device ids
d) Device ids are used only by the device drivers

Answer: b

7. Concatenation is the technique of
a) Adding physical volumes together to make a volume group
b) Filling up a physical volume completely before writing to the next one in a logical volume
c) Writing a block of data onto one disk and then a block onto another disk in an alternate fashion
d) Increasing the size of a volume by adding more disks

Answer: b

8. Which of the following is not a feature of LVM?
a) Independent of disk location
b) Concatenation and striping of storage systems
c) Protection against disk failures
d) Snapshot capability

Answer: c

9. LVM does not incur much performance overhead because
a) The writes/reads happen only to logical devices
b) The mapping of logical to physical storage is kept in RAM
c) The time lost in mapping devices is gained by writing to disks in parallel
d) None of the mentioned

Answer: b
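
A toy illustration of why the lookup is cheap (all names below are invented for illustration): the logical-to-physical extent map is a small in-memory table, so translating a logical extent to a disk location is a RAM lookup rather than extra disk I/O.

```java
import java.util.HashMap;
import java.util.Map;

public class ExtentMap {
    // A physical extent: which device, and the extent index on that device.
    record PhysicalExtent(String device, long index) {}

    // Every logical extent maps to one and only one physical extent (cf. Q4).
    private final Map<Long, PhysicalExtent> leToPe = new HashMap<>();

    void map(long logicalExtent, String device, long physicalIndex) {
        leToPe.put(logicalExtent, new PhysicalExtent(device, physicalIndex));
    }

    PhysicalExtent resolve(long logicalExtent) {
        return leToPe.get(logicalExtent); // in-RAM lookup, no disk access
    }

    public static void main(String[] args) {
        ExtentMap vol = new ExtentMap();
        vol.map(0, "/dev/sda1", 17);
        vol.map(1, "/dev/sdb1", 4);
        System.out.println(vol.resolve(1)); // PhysicalExtent[device=/dev/sdb1, index=4]
    }
}
```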

10. The VGDA (Volume Group Descriptor Area) represents
a) The data stored on the logical volume
b) The data stored on physical volume
c) LVM configuration data stored on each physical volume
d) None of the mentioned

Answer: c

Interview MCQ Set 4

1. ____________ specifies the number of segments on disk to be merged at the same time.
a) mapred.job.shuffle.merge.percent
b) mapred.job.reduce.input.buffer.percent
c) mapred.inmem.merge.threshold
d) io.sort.factor

Answer: d [Reason:] io.sort.factor limits the number of open files and compression codecs during the merge.
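
For illustration, these merge knobs are ordinary job-configuration integers (old mapred API; the values shown are arbitrary examples, not recommendations):

```java
import org.apache.hadoop.mapred.JobConf;

public class MergeTuning {
    static void tune(JobConf conf) {
        // How many on-disk segments are merged in one pass; higher values
        // mean fewer passes but more simultaneously open files.
        conf.setInt("io.sort.factor", 100);
        // How many in-memory map outputs accumulate before an in-memory
        // merge is triggered.
        conf.setInt("mapred.inmem.merge.threshold", 1000);
    }
}
```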

2. Point out the correct statement:
a) The number of sorted map outputs fetched into memory before being merged to disk
b) The memory threshold for fetched map outputs before an in-memory merge is finished
c) The percentage of memory relative to the maximum heapsize in which map outputs may not be retained during the reduce
d) None of the mentioned

Answer: a [Reason:] When the reduce begins, map outputs will be merged to disk until those that remain are under the resource limit this defines.

3. Map output larger than ___ percent of the memory allocated to copying map outputs will be written directly to disk.
a) 10
b) 15
c) 25
d) 35

Answer: c [Reason:] Map output will be written directly to disk without first staging through memory.

4. Jobs can enable task JVMs to be reused by specifying the job configuration :
a) mapred.job.recycle.jvm.num.tasks
b) mapissue.job.reuse.jvm.num.tasks
c) mapred.job.reuse.jvm.num.tasks
d) all of the mentioned

Answer: c [Reason:] The configuration property is mapred.job.reuse.jvm.num.tasks; enabling JVM reuse can improve performance significantly for jobs with many short-lived tasks.
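
As a sketch, JVM reuse is enabled through the job configuration (old mapred API; -1 conventionally means unlimited reuse within a job):

```java
import org.apache.hadoop.mapred.JobConf;

public class JvmReuseExample {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        conf.setInt("mapred.job.reuse.jvm.num.tasks", -1); // reuse without limit
        // Equivalent typed setter in the old API:
        conf.setNumTasksToExecutePerJvm(-1);
    }
}
```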

5. Point out the wrong statement:
a) The task tracker has local directory to create localized cache and localized job
b) The task tracker can define multiple local directories
c) The Job tracker cannot define multiple local directories
d) None of the mentioned

Answer: d [Reason:] When the job starts, the task tracker creates a localized job directory relative to the local directory specified in the configuration.

6. During the execution of a streaming job, the names of the _______ parameters are transformed.
a) vmap
b) mapvim
c) mapreduce
d) mapred

Answer: d [Reason:] To get the values in a streaming job’s mapper/reducer, use the parameter names with underscores; for example, mapred.job.id becomes mapred_job_id.

7. The standard output (stdout) and error (stderr) streams of the task are read by the TaskTracker and logged to :
a) ${HADOOP_LOG_DIR}/user
b) ${HADOOP_LOG_DIR}/userlogs
c) ${HADOOP_LOG_DIR}/logs
d) None of the mentioned

Answer: b [Reason:] The TaskTracker writes each task’s stdout and stderr to log files under ${HADOOP_LOG_DIR}/userlogs.

8. ____________ is the primary interface by which user-job interacts with the JobTracker.
a) JobConf
b) JobClient
c) JobServer
d) All of the mentioned

Answer: b [Reason:] JobClient provides facilities to submit jobs, track their progress, access component-tasks’ reports and logs, get the MapReduce cluster’s status information and so on.
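
A skeletal example of driving a job through JobClient (old mapred API; the mapper/reducer classes are omitted and the in/out paths are placeholders):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SubmitJob {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(SubmitJob.class);
        conf.setJobName("example");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(conf, new Path("in"));
        FileOutputFormat.setOutputPath(conf, new Path("out"));
        JobClient.runJob(conf); // submits the job and polls until completion
    }
}
```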

9. The _____________ can also be used to distribute both jars and native libraries for use in the map and/or reduce tasks.
a) DistributedLog
b) DistributedCache
c) DistributedJars
d) None of the mentioned

Answer: b [Reason:] Cached libraries can be loaded via System.loadLibrary or System.load.
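
A sketch of distributing a jar and a native library this way (old API; the HDFS paths are invented placeholders):

```java
import java.net.URI;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;

public class CacheSetup {
    static void configure(JobConf conf) throws Exception {
        // Ship a jar and put it on the tasks' classpath.
        DistributedCache.addFileToClassPath(new Path("/libs/helper.jar"), conf);
        // Ship a native library; the #fragment names the symlink in the task's
        // working directory, after which System.loadLibrary can find it.
        DistributedCache.addCacheFile(new URI("/native/libfoo.so#libfoo.so"), conf);
        DistributedCache.createSymlink(conf);
    }
}
```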

10. __________ is used to filter log files from the output directory listing.
a) OutputLog
b) OutputLogFilter
c) DistributedLog
d) DistributedJars
e) None of the mentioned

Answer: b [Reason:] OutputLogFilter filters log files out of a job output directory listing, so only the actual output files are returned.

Interview MCQ Set 5

1. A ________ node acts as the Slave and is responsible for executing a Task assigned to it by the JobTracker.
a) MapReduce
b) Mapper
c) TaskTracker
d) JobTracker

Answer: c [Reason:] The TaskTracker receives the information necessary to execute a task from the JobTracker, executes the task, and sends the results back to the JobTracker.

2. Point out the correct statement:
a) MapReduce tries to place the data and the compute as close as possible
b) Map Task in MapReduce is performed using the Mapper() function
c) Reduce Task in MapReduce is performed using the Map() function
d) All of the mentioned

Answer: a [Reason:] This feature of MapReduce is “Data Locality”.

3. ___________ part of MapReduce is responsible for processing one or more chunks of data and producing the output results.
a) Maptask
b) Mapper
c) Task execution
d) All of the mentioned

Answer: a [Reason:] Map Task in MapReduce is performed using the Map() function.

4. _________ function is responsible for consolidating the results produced by each of the Map() functions/tasks.
a) Reduce
b) Map
c) Reducer
d) All of the mentioned

Answer: a [Reason:] The Reduce() function collates the work and resolves the results.

5. Point out the wrong statement:
a) A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner
b) The MapReduce framework operates exclusively on <key, value> pairs
c) Applications typically implement the Mapper and Reducer interfaces to provide the map and reduce methods
d) None of the mentioned

Answer: d [Reason:] The MapReduce framework takes care of scheduling tasks, monitoring them, and re-executing the failed tasks.

6. Although the Hadoop framework is implemented in Java, MapReduce applications need not be written in:
a) Java
b) C
c) C#
d) None of the mentioned

Answer: a [Reason:] Hadoop Pipes is a SWIG-compatible C++ API (non-JNI based) for implementing MapReduce applications.

7. ________ is a utility which allows users to create and run jobs with any executables as the mapper and/or the reducer.
a) Hadoop Strdata
b) Hadoop Streaming
c) Hadoop Stream
d) None of the mentioned

Answer: b [Reason:] Hadoop streaming is one of the most important utilities in the Apache Hadoop distribution.

8. __________ maps input key/value pairs to a set of intermediate key/value pairs.
a) Mapper
b) Reducer
c) Both Mapper and Reducer
d) None of the mentioned

Answer: a [Reason:] Maps are the individual tasks that transform input records into intermediate records.
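
The classic word-count mapper shows this transformation of input records into intermediate records (a standard textbook sketch using the old mapred API, not something specific to this question set):

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class WordMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> out, Reporter reporter)
            throws IOException {
        // Emit one <word, 1> intermediate pair per token in the line.
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                out.collect(word, ONE);
            }
        }
    }
}
```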

9. The number of maps is usually driven by the total size of:
a) inputs
b) outputs
c) tasks
d) None of the mentioned

Answer: a [Reason:] Total size of inputs means total number of blocks of the input files.

10. _________ is the default Partitioner for partitioning key space.
a) HashPar
b) Partitioner
c) HashPartitioner
d) None of the mentioned

Answer: c [Reason:] The default partitioner in Hadoop is HashPartitioner, which uses its getPartition() method to assign each key to a partition by hashing.
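
The hashing step itself is tiny; this sketch mirrors the logic inside Hadoop's HashPartitioner (the surrounding class is mine, for illustration):

```java
public class HashPartitionSketch {
    // Partition = key's hash, masked non-negative, modulo the reducer count.
    static int getPartition(Object key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        // The same key always maps to the same reducer.
        System.out.println(getPartition("apple", 4));
        System.out.println(getPartition("banana", 4));
    }
}
```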