Test information:
Number of questions: 55
Time allowed in minutes: 90
Required passing score: 60%
Languages: English
This test consists of 5 sections containing a total of 55 multiple-choice questions. The percentages after each section title reflect the approximate distribution of the total question set across the sections.
Requirements (16%)
Define the input data structure
Define the outputs
Define the security requirements
Define the requirements for replacing and/or merging with existing business solutions
Define the solution to meet the customer’s SLA
Define the network requirements based on the customer’s requirements
Use Cases (46%)
Determine when a cloud-based solution is more appropriate than an in-house solution (and migration plans from one to the other)
Demonstrate why Cloudant would be an applicable technology for a particular use case
Demonstrate why SQL or NoSQL would be an applicable technology for a particular use case
Demonstrate why Open Data Platform would be an applicable technology for a particular use case
Demonstrate why BigInsights would be an applicable technology for a particular use case
Demonstrate why BigSQL would be an applicable technology for a particular use case
Demonstrate why Hadoop would be an applicable technology for a particular use case
Demonstrate why BigR and SPSS would be applicable technologies for a particular use case
Demonstrate why BigSheets would be an applicable technology for a particular use case
Demonstrate why Streams would be an applicable technology for a particular use case
Demonstrate why Netezza would be an applicable technology for a particular use case
Demonstrate why DB2 BLU would be an applicable technology for a particular use case
Demonstrate why GPFS/HPFS would be an applicable technology for a particular use case
Demonstrate why Spark would be an applicable technology for a particular use case (see the sketch after this list)
Demonstrate why YARN would be an applicable technology for a particular use case
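As an illustration of the Spark objective above, the following is a minimal PySpark sketch of the in-memory, iterative processing that typically makes Spark a fit for analytics use cases. The application name and input file are hypothetical; a local Spark installation is assumed.

    # Minimal PySpark sketch; assumes a hypothetical input file "events.txt".
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("UseCaseSketch").getOrCreate()

    # Split raw text into words and count occurrences across the cluster.
    words = (spark.read.text("events.txt").rdd
             .flatMap(lambda row: row.value.split())
             .map(lambda w: (w, 1))
             .reduceByKey(lambda a, b: a + b))

    # cache() keeps the result in memory across repeated actions, which is
    # what distinguishes Spark from disk-bound MapReduce for iterative work.
    words.cache()
    print(words.take(10))

    spark.stop()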
Applying Technologies (16%)
Define the necessary technology to ensure horizontal and vertical scalability
Determine data storage requirements based on data volumes
Design a data model and data flow model that will meet the business requirements
Define the appropriate Big Data technology for a given customer requirement (e.g. Hive/HBase or Cloudant)
Define appropriate storage format and compression for a given customer requirement (see the sketch below)
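One common answer to the storage format and compression objective is a columnar format with lightweight block compression. The following sketch assumes PySpark and a hypothetical DataFrame of customer events; the right choice always depends on the workload.

    # Minimal sketch: write a DataFrame as Parquet compressed with Snappy.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("StorageFormatSketch").getOrCreate()

    df = spark.createDataFrame(
        [("c001", "click", 3), ("c002", "purchase", 1)],
        ["customer_id", "event", "count"],
    )

    # A columnar format plus lightweight compression favors scan-heavy
    # analytics; row-oriented formats (e.g. Avro) suit write-heavy or
    # record-at-a-time access patterns.
    (df.write.mode("overwrite")
       .option("compression", "snappy")
       .parquet("/tmp/events_parquet"))

    spark.stop()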
Recoverability (11%)
Define the potential need for high availability
Define the potential disaster recovery requirements
Define the technical requirements for data retention
Define the technical requirements for data replication (see the sketch after this list)
Define the technical requirements for preventing data loss
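As a concrete example of the replication objective, the following sketch raises the HDFS replication factor for a path whose loss-prevention requirements exceed the cluster default. The path is hypothetical, and an HDFS client is assumed to be on the PATH.

    # Minimal sketch: "hdfs dfs -setrep -w N <path>" changes per-path
    # replication and, with -w, waits until the NameNode reports that the
    # new factor is satisfied.
    import subprocess

    subprocess.run(
        ["hdfs", "dfs", "-setrep", "-w", "3", "/data/critical"],
        check=True,
    )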
Infrastructure (11%)
Define the hardware and software infrastructure requirements
Design the integration of the required hardware and software components
Design the connectors / interfaces / APIs between the Big Data solution and the existing systems (see the sketch below)
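As an illustration of the connector objective, the following is a minimal read-only connector between an existing system and HDFS over the WebHDFS REST API. The host, port, user, and file path are hypothetical, and simple (non-Kerberos) authentication is assumed; the "/webhdfs/v1/<path>?op=OPEN" interface itself is standard Hadoop.

    # Minimal sketch of a WebHDFS read connector.
    import requests

    NAMENODE = "http://namenode.example.com:9870"  # host and port assumed

    def read_hdfs_file(path: str, user: str = "hdfs") -> bytes:
        # OPEN returns a redirect to a DataNode; requests follows it.
        resp = requests.get(
            NAMENODE + "/webhdfs/v1" + path,
            params={"op": "OPEN", "user.name": user},
        )
        resp.raise_for_status()
        return resp.content

    print(read_hdfs_file("/data/reference/codes.csv")[:200])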
Job Role Description / Target Audience
The Big Data Architect works closely with the customer and the solutions architect to translate the customer’s business requirements into a Big Data solution. The Big Data Architect has deep knowledge of the relevant technologies, understands the relationships among those technologies, and knows how they can be integrated and combined to effectively solve a given big data business problem. This individual is able to design large-scale data processing systems for the enterprise and to provide input on architectural decisions, including hardware and software. The Big Data Architect also understands the complexity of data and can design systems and models that handle the different dimensions of data: variety (structured, semi-structured, and unstructured), volume, velocity (including stream processing), and veracity. The Big Data Architect is also able to effectively address the information governance and security challenges associated with the system.
Recommended Prerequisite Skills
Understand the data layer and particular areas of potential challenge/risk in the data layer
Ability to translate functional requirements into technical specifications
Ability to take overall solution/logical architecture and provide physical architecture
Understand Cluster Management
Understand Network Requirements
Understand Important Interfaces
Understand Data Modeling
Ability to identify/support non-functional requirements for the solution
Understand Latency
Understand Scalability
Understand High Availability
Understand Data Replication and Synchronization
Understand Disaster Recovery
Understand Overall performance (Query Performance, Workload Management, Database Tuning)
Propose recommended and/or best practices regarding the movement, manipulation, and storage of data in a big data solution, including but not limited to:
Understand Data ingestion technical options
Understand Data storage options and ramifications (for example, understand the additional requirements and challenges introduced by data in the cloud)
Understand Data querying techniques & availability to support analytics (see the sketch after this list)
Understand Data lineage and data governance
Understand Data variety (social, machine data) and data volume
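As an example of the querying item above, the following sketch runs an analytic query against a Hive table through HiveServer2 using the third-party pyhive package. The host, database, and table are hypothetical; Big SQL or a JDBC client would serve the same purpose.

    # Minimal sketch: query Hive to support analytics.
    from pyhive import hive  # third-party package (pip install pyhive)

    conn = hive.Connection(host="hiveserver.example.com", port=10000,
                           database="default")
    cursor = conn.cursor()
    cursor.execute("SELECT event, COUNT(*) AS n FROM events GROUP BY event")
    for event, n in cursor.fetchall():
        print(event, n)
    conn.close()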
Understand, implement, and provide guidance on data security to support the implementation, including but not limited to:
Understand LDAP Security
Understand User Roles/Security
Understand Data Monitoring
Understand Personally Identifiable Information (PII) Data Security considerations (illustrated in the sketch below)
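As one concrete PII safeguard, the sketch below hashes identifying fields before records land in the analytics cluster. The field names and salt are hypothetical; in practice this complements, rather than replaces, the LDAP, role, and monitoring controls listed above.

    # Minimal sketch: one-way masking of PII fields.
    import hashlib

    SALT = b"replace-with-a-managed-secret"  # assumption: kept outside code

    def mask_pii(record: dict, pii_fields=("ssn", "email")) -> dict:
        masked = dict(record)
        for field in pii_fields:
            if masked.get(field) is not None:
                digest = hashlib.sha256(
                    SALT + str(masked[field]).encode()).hexdigest()
                masked[field] = digest[:16]  # truncated digest still joins
        return masked

    print(mask_pii({"name": "Ann", "ssn": "123-45-6789",
                    "email": "ann@example.com"}))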
Software areas of central focus:
BigInsights
BigSQL
Hadoop
Cloudant (NoSQL; illustrated in the sketch below)
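As an illustration of why Cloudant fits document-oriented operational data, the following sketch selects JSON documents declaratively through Cloudant Query (the /_find endpoint). The account URL, credentials, database, and fields are hypothetical.

    # Minimal sketch: a Cloudant Query request over HTTP.
    import requests

    ACCOUNT = "https://example-account.cloudant.com"  # hypothetical
    AUTH = ("example-user", "example-password")       # hypothetical

    resp = requests.post(
        ACCOUNT + "/orders/_find",
        json={"selector": {"status": "open"}, "limit": 5},
        auth=AUTH,
    )
    resp.raise_for_status()
    for doc in resp.json()["docs"]:
        print(doc["_id"], doc.get("status"))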
Software areas of peripheral focus:
Information Server
Integration with BigInsights, Balanced Optimization for Hadoop, JAQL push-down capability, etc.
Data Governance
Security features of BigInsights
Information Server (Metadata Workbench for Lineage)
Optim Integration with BigInsights (archival)
DataClick for BigInsights (future: DataClick for Cloudant, to pull operational data into Hadoop for analytics; scripts are available today)
BigMatch (deriving a single view of an entity)
Guardium (monitoring)
Analytic Tools (SPSS)
BigSheets
Support in Hadoop/BigInsights
Data Availability and Querying Support
Streams
Interface/Integration with BigInsights
Streaming Data Concepts
In-memory analytics
Netezza
DB2 BLU
Graph Databases
Machine Learning (SystemML)
QUESTION 1
What are the two levels documented in the Operational Model? (Choose two.)
A. Logical
B. Rational
C. Theoretical
D. Physical
E. Middleware
Answer: A,D
QUESTION 2
The inputs to the Architectural Overview document do NOT include which of the following?
A. Architectural Goals
B. Key Concepts
C. Architectural Overview Diagram
D. Component Model
Answer: D
QUESTION 3
The downside of cloud computing, relative to SLAs, is the difficulty in determining which of the following?
A. Root cause for service interruptions
B. Turn-Around-Time (TAT)
C. Mean Time To Recover (MTTR)
D. First Call Resolution (FCR)
Answer: A
Explanation:
References: https://en.wikipedia.org/wiki/Service-level_agreement
QUESTION 4
“The programming model for client developers will hide the complexity of interfacing to legacy systems” is an example of which of the following?
A. A use case
B. An architectural decision
C. A client imperative
D. An empathy statement
Answer: B
QUESTION 5
Which of the following is the artifact that assists in ensuring that the project is on the right path toward success?
A. Component Model
B. Empathy Map
C. Viability Assessment
D. Opportunity Plan
Answer: C