AWS Certified SysOps Administrator Exam Practice Questions, Page 5 (Dump)
Question No:-41
What are characteristics of Amazon S3? (Choose two.)
1. Objects are directly accessible via a URL
2. S3 should be used to host a relational database
3. S3 allows you to store objects of virtually unlimited size
4. S3 allows you to store virtually unlimited amounts of data
5. S3 offers Provisioned IOPS
Reference:-
https://aws.amazon.com/s3/faqs/
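For illustration, a minimal boto3 sketch of option 1's point that every S3 object is addressable via a URL; the bucket and key names here are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# A public object is reachable directly at a URL such as
# https://example-bucket.s3.amazonaws.com/report.pdf
# For a private object, generate a time-limited presigned URL instead:
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "report.pdf"},
    ExpiresIn=3600,  # URL stays valid for one hour
)
print(url)
```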
Question No:-42
You receive a frantic call from a new DBA who accidentally dropped a table containing all your customers.
Which Amazon RDS feature will allow you to reliably restore your database to within 5 minutes of when the mistake was made?
1. Multi-AZ RDS
2. RDS snapshots
3. RDS read replicas
4. RDS automated backup
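Option 4 describes RDS automated backups, which enable point-in-time recovery to any moment within the retention window, typically up to about the last five minutes. A minimal boto3 sketch, with hypothetical instance identifiers:

```python
import boto3

rds = boto3.client("rds")

# Restore a new instance from the automated backups of the source.
# UseLatestRestorableTime targets the most recent restorable moment,
# which normally lags real time by five minutes or less.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="customers-db",
    TargetDBInstanceIdentifier="customers-db-restored",
    UseLatestRestorableTime=True,
)
```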
Question No:-43
A media company produces new video files on-premises every day, with a total size of around 100 GB after compression. All files are 1-2 GB in size and need to be uploaded to Amazon S3 every night in a fixed time window between 3 am and 5 am. The current upload takes almost 3 hours, although less than half of the available bandwidth is used.
What step(s) would ensure that the file uploads are able to complete in the allotted time window?
1. Increase your network bandwidth to provide faster throughput to S3
2. Upload the files in parallel to S3
3. Pack all files into a single archive, upload it to S3, then extract the files in AWS
4. Use AWS Import/Export to transfer the video files
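Option 2 uses the idle bandwidth by running uploads concurrently: multipart upload parallelizes within each 1-2 GB file, and a worker pool parallelizes across files. A minimal boto3 sketch, with a hypothetical bucket name and paths:

```python
import boto3
from boto3.s3.transfer import TransferConfig
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

s3 = boto3.client("s3")

# Split each file into 64 MB parts uploaded over 10 parallel connections.
config = TransferConfig(multipart_chunksize=64 * 1024 * 1024, max_concurrency=10)

def upload(path: Path) -> None:
    s3.upload_file(str(path), "nightly-video-bucket", path.name, Config=config)

# Upload several files at once as well; list() forces completion so
# any upload error surfaces here.
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(upload, Path("/videos/outgoing").glob("*.mp4")))
```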
Question No:-44
You are running a web application on AWS consisting of the following components: an Elastic Load Balancer (ELB), an Auto Scaling group of EC2 instances running Linux/PHP/Apache, and Relational Database Service (RDS) MySQL.
Which security measures fall into AWS's responsibility?
1. Protect the EC2 instances against unsolicited access by enforcing the principle of least-privilege access
2. Protect against IP spoofing or packet sniffing
3. Ensure all communication between EC2 instances and the ELB is encrypted
4. Install the latest security patches on the ELB, RDS, and EC2 instances
Question No:-45
You use S3 to store critical data for your company. Several users within your group currently have full permissions to your S3 buckets. You need to come up with a solution that does not impact your users and also protects against the accidental deletion of objects.
Which two options will address this issue? (Choose two.)
1. Enable versioning on your S3 Buckets
2. Configure your S3 Buckets with MFA delete
3. Create a bucket policy and only allow read-only permissions to all users at the bucket level
4. Enable object lifecycle policies and configure the data older than 3 months to be archived in Glacier
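Options 1 and 2 work together: versioning keeps every overwritten or deleted object recoverable without changing how users work, and MFA Delete requires a one-time code before a version can be permanently removed. A minimal boto3 sketch; the bucket name and MFA device serial are hypothetical, and MFA Delete can only be enabled with the root account's MFA device:

```python
import boto3

s3 = boto3.client("s3")

# Enable versioning and MFA Delete in one call. The MFA argument is
# the device serial (or ARN) followed by the current 6-digit code.
s3.put_bucket_versioning(
    Bucket="critical-data-bucket",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
)
```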
Question No:-46
An organization's security policy requires multiple copies of all critical data to be replicated across at least a primary and backup data center. The organization has decided to store some critical data on Amazon S3.
Which option should you implement to ensure this requirement is met?
1. Use the S3 copy API to replicate data between two S3 buckets in different regions
2. You do not need to implement anything since S3 data is automatically replicated between regions
3. Use the S3 copy API to replicate data between two S3 buckets in different facilities within an AWS Region
4. You do not need to implement anything since S3 data is automatically replicated between multiple facilities within an AWS Region
Answer:-4. You do not need to implement anything since S3 data is automatically replicated between multiple facilities within an AWS Region
Note:-
You specify a region when you create your Amazon S3 bucket. Within that region, your objects are redundantly stored on multiple devices across multiple facilities. Please refer to Regional Products and Services for details of Amazon S3 service availability by region.
Reference:-
https://aws.amazon.com/s3/faqs/
Question No:-47
You are tasked with setting up a cluster of EC2 instances for a NoSQL database. The database requires random read I/O disk performance of up to 100,000 IOPS at a 4 KB block size per node.
Which of the following EC2 instances will perform the best for this workload?
1. A High-Memory Quadruple Extra Large (m2.4xlarge) with EBS-Optimized set to true and a PIOPS EBS volume
2. A Cluster Compute Eight Extra Large (cc2.8xlarge) using instance storage
3. High I/O Quadruple Extra Large (hi1.4xlarge) using instance storage
4. A Cluster GPU Quadruple Extra Large (cg1.4xlarge) using four separate 4000 PIOPS EBS volumes in a RAID 0 configuration
Answer:-3. High I/O Quadruple Extra Large (hi1.4xlarge) using instance storage
Note:-
The SSD storage is local to the instance. Using PV virtualization, you can expect 120,000 random read IOPS (Input/Output Operations Per Second) and between 10,000 and 85,000 random write IOPS, both with 4K blocks. For HVM and Windows AMIs, you can expect 90,000 random read IOPS and 9,000 to 75,000 random write IOPS.
Reference:-
https://aws.amazon.com/blogs/aws/new-high-io-ec2-instance-type-hi14xlarge/
Question No:-48
When an EC2 EBS-backed (EBS root) instance is stopped, what happens to the data on any ephemeral store volumes?
1. Data will be deleted and will no longer be accessible
2. Data is automatically saved in an EBS volume.
3. Data is automatically saved as an EBS snapshot
4. Data is unavailable until the instance is restarted
Question No:-49
Your team is excited about the use of AWS because now they have access to "programmable infrastructure". You have been asked to manage your AWS infrastructure in a manner similar to the way you might manage application code. You want to be able to deploy exact copies of different versions of your infrastructure, stage changes into different environments, revert to previous versions, and identify what versions are running at any particular time (development, test, QA, production).
Which approach addresses this requirement?
1. Use cost allocation reports and AWS Opsworks to deploy and manage your infrastructure.
2. Use AWS CloudWatch metrics and alerts along with resource tagging to deploy and manage your infrastructure.
3. Use AWS Beanstalk and a version control system like GIT to deploy and manage your infrastructure.
4. Use AWS CloudFormation and a version control system like GIT to deploy and manage your infrastructure.
Answer:-4. Use AWS CloudFormation and a version control system like GIT to deploy and manage your infrastructure.
Note:-
AWS CloudFormation templates are plain text files (in JSON or YAML) that declaratively describe a collection of AWS resources. Because templates are text, they can be stored in a version control system such as Git, which lets you track every change, revert to a previous version, deploy exact copies of a stack into multiple environments, and identify which template version is running where.
Reference:-
https://aws.amazon.com/cloudformation/faqs/
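A minimal sketch of the CloudFormation-plus-Git workflow, assuming a template file kept under version control; the stack name, template path, and parameter are hypothetical:

```python
import boto3

cf = boto3.client("cloudformation")

# The template lives in Git next to the application code, so every
# infrastructure version can be tagged, diffed, and reverted.
with open("infrastructure/web-stack.yaml") as f:
    template_body = f.read()

# Launch an exact copy of a given template version into one environment.
cf.create_stack(
    StackName="web-app-qa",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "qa"}],
)
```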
Question No:-50
You have a server with a 500 GB Amazon EBS data volume. The volume is 80% full. You need to back up the volume at regular intervals and be able to re-create the volume in a new Availability Zone in the shortest time possible. All applications using the volume can be paused for a period of a few minutes with no discernible user impact.
Which of the following backup methods will best fulfill your requirements?
1. Take periodic snapshots of the EBS volume
2. Use a third party Incremental backup application to back up to Amazon Glacier
3. Periodically back up all data to a single compressed archive and archive to Amazon S3 using a parallelized multi-part upload
4. Create another EBS volume in the second Availability Zone, attach it to the Amazon EC2 instance, and use a disk manager to mirror the two disks
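Option 1 fits best because EBS snapshots are incremental and stored in S3 at the Region level, so a snapshot taken in one Availability Zone can seed a new volume in another. A minimal boto3 sketch with hypothetical IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Pause the applications, then snapshot. Snapshots are incremental:
# only blocks changed since the previous snapshot are copied.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly backup",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Re-create the volume in a different Availability Zone from the snapshot.
ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="us-east-1b",
)
```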