Google Professional Cloud Architect Exam Page 8 (Dumps)
Question No:-71
You are tasked with building an online analytical processing (OLAP) marketing analytics and reporting tool. This requires a relational database that can operate on hundreds of terabytes of data. What is the Google-recommended tool for such applications?
1. Cloud Spanner, because it is globally distributed
2. Cloud SQL, because it is a fully managed relational database
3. Cloud Firestore, because it offers real-time synchronization across devices
4. BigQuery, because it is designed for large-scale processing of tabular data
Answer:-4. BigQuery, because it is designed for large-scale processing of tabular data
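For reference, BigQuery is queried with standard SQL through the bq command-line tool. A minimal sketch, assuming a hypothetical marketing.events table in the current project:

  # Ad-hoc OLAP aggregation over a large tabular dataset
  bq query --use_legacy_sql=false \
    'SELECT campaign, SUM(spend) AS total_spend
     FROM marketing.events
     GROUP BY campaign
     ORDER BY total_spend DESC'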
Question No:-72
You have deployed an application to Google Kubernetes Engine (GKE), and are using the Cloud SQL proxy container to make the Cloud SQL database available to the services running on Kubernetes. You are notified that the application is reporting database connection issues. Your company policies require a post-mortem. What should you do?
1. Use gcloud sql instances restart.
2. Validate that the Service Account used by the Cloud SQL proxy container still has the Cloud Build Editor role.
3. In the GCP Console, navigate to Stackdriver Logging. Consult logs for GKE and Cloud SQL.
4. In the GCP Console, navigate to Cloud SQL. Restore the latest backup. Use kubectl to restart all pods.
Answer:-3. In the GCP Console, navigate to Stackdriver Logging. Consult logs for GKE and Cloud SQL.
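A minimal sketch of pulling the same logs from the command line for the post-mortem, assuming gcloud is authenticated against the affected project (filters, limits, and freshness are illustrative):

  # Recent GKE container errors
  gcloud logging read 'resource.type="k8s_container" AND severity>=ERROR' --limit=50 --freshness=1d
  # Recent Cloud SQL errors for the backend instance
  gcloud logging read 'resource.type="cloudsql_database" AND severity>=ERROR' --limit=50 --freshness=1d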
Question No:-73
Your company is running a stateless application on a Compute Engine instance. The application is used heavily during regular business hours and lightly outside of business hours. Users are reporting that the application is slow during peak hours. You need to optimize the application's performance. What should you do?
1. Create a snapshot of the existing disk. Create an instance template from the snapshot. Create an autoscaled managed instance group from the instance template.
2. Create a snapshot of the existing disk. Create a custom image from the snapshot. Create an autoscaled managed instance group from the custom image.
3. Create a custom image from the existing disk. Create an instance template from the custom image. Create an autoscaled managed instance group from the instance template.
4. Create an instance template from the existing disk. Create a custom image from the instance template. Create an autoscaled managed instance group from the custom image.
Answer:-3. Create a custom image from the existing disk. Create an instance template from the custom image. Create an autoscaled managed instance group from the instance template.
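The same steps expressed as gcloud commands, as a sketch only; disk, image, zone, and machine-type names are illustrative assumptions:

  # 1. Custom image from the existing boot disk
  gcloud compute images create app-image --source-disk=app-disk --source-disk-zone=us-central1-a
  # 2. Instance template from the custom image
  gcloud compute instance-templates create app-template --image=app-image --machine-type=n1-standard-2
  # 3. Managed instance group with CPU-based autoscaling
  gcloud compute instance-groups managed create app-mig --template=app-template --size=2 --zone=us-central1-a
  gcloud compute instance-groups managed set-autoscaling app-mig --zone=us-central1-a \
    --min-num-replicas=2 --max-num-replicas=10 --target-cpu-utilization=0.6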
Question No:-74
Your web application has several VM instances running within a VPC. You want to restrict communications between instances to only the paths and ports you authorize, but you don't want to rely on static IP addresses or subnets because the app can autoscale. How should you restrict communications?
1. Use separate VPCs to restrict traffic
2. Use firewall rules based on network tags attached to the compute instances
3. Use Cloud DNS and only allow connections from authorized hostnames
4. Use service accounts and configure the web application to authorize particular service accounts to have access
Answer:-2. Use firewall rules based on network tags attached to the compute instances
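A minimal sketch of such a rule, assuming hypothetical web and db tags and MySQL on port 3306:

  # Only instances tagged "web" may reach instances tagged "db", and only on 3306
  gcloud compute firewall-rules create allow-web-to-db \
    --network=my-vpc --allow=tcp:3306 --source-tags=web --target-tags=db
  # Tags survive autoscaling when set on the instance template; for a single VM:
  gcloud compute instances add-tags web-frontend-1 --tags=web --zone=us-central1-a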
Question No:-75
You are using Cloud SQL as the database backend for a large CRM deployment. You want to scale as usage increases and ensure that you don't run out of storage, keep CPU usage below 75% across cores, and keep replication lag below 60 seconds. What are the correct steps to meet your requirements?
A. 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.
B. 1. Enable automatic storage increase for the instance. 2. Change the instance type to a 32-core machine type to keep CPU usage below 75%. 3. Create a Stackdriver alert for replication lag, and deploy memcache to reduce load on the master.
C. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy memcached to reduce CPU load. 3. Change the instance type to a 32-core machine type to reduce replication lag.
D. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy memcached to reduce CPU load. 3. Create a Stackdriver alert for replication lag, and change the instance type to a 32-core machine type to reduce replication lag.
Answer:-A. 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.
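Step 1 maps to a single gcloud flag; the alerts in steps 2 and 3 are defined in Stackdriver Monitoring against Cloud SQL metrics. A sketch, with an illustrative instance name:

  # Enable automatic storage increase on the Cloud SQL instance
  gcloud sql instances patch crm-db --storage-auto-increase
  # Alerting policies then target cloudsql.googleapis.com/database/cpu/utilization (> 0.75)
  # and cloudsql.googleapis.com/database/replication/replica_lag (> 60 s).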
Question No:-76
Your company pushes batches of sensitive transaction data from its application server VMs to Cloud Pub/Sub for processing and storage. What is the Google-recommended way for your application to authenticate to the required Google Cloud services?
1. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.
2. Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes to grant the appropriate Cloud Pub/Sub IAM roles.
3. Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud Storage for access from each VM.
4. Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles.
Answer:-1. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.
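Granting the role is a one-line IAM binding. A sketch, assuming a hypothetical topic and VM service account:

  # Allow the VMs' service account to publish to the transactions topic
  gcloud pubsub topics add-iam-policy-binding transactions \
    --member=serviceAccount:app-vm@my-project.iam.gserviceaccount.com \
    --role=roles/pubsub.publisher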
Question No:-77
You want to establish a Compute Engine application in a single VPC across two regions. The application must communicate over VPN to an on-premises network.
How should you deploy the VPN?
1. Use VPC Network Peering between the VPC and the on-premises network.
2. Expose the VPC to the on-premises network using IAM and VPC Sharing.
3. Create a global Cloud VPN Gateway with VPN tunnels from each region to the on-premises peer gateway.
4. Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway.
Answer:-4. Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway.
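A sketch of the per-region gateways (HA VPN shown; network, region, and gateway names are illustrative), with tunnels and routing configured afterwards:

  # One Cloud VPN gateway in each region that hosts the application
  gcloud compute vpn-gateways create vpn-gw-us --network=app-vpc --region=us-central1
  gcloud compute vpn-gateways create vpn-gw-eu --network=app-vpc --region=europe-west1
  # Each gateway then needs at least one tunnel (gcloud compute vpn-tunnels create ...)
  # to the on-premises peer gateway, plus Cloud Router or static-route configuration.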
Question No:-78
Your applications will be writing their logs to BigQuery for analysis. Each application should have its own table. Any logs older than 45 days should be removed.
You want to optimize storage and follow Google-recommended practices. What should you do?
1. Configure the expiration time for your tables at 45 days
2. Make the tables time-partitioned, and configure the partition expiration at 45 days
3. Rely on BigQuery's default behavior to prune application logs older than 45 days
4. Create a script that uses the BigQuery command line tool (bq) to remove records older than 45 days
Answer:-2. Make the tables time-partitioned, and configure the partition expiration at 45 days
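A sketch of creating one such table with the bq tool; the dataset, table, and schema are illustrative (45 days = 3,888,000 seconds):

  bq mk --table \
    --time_partitioning_type=DAY \
    --time_partitioning_expiration=3888000 \
    app_logs.frontend 'timestamp:TIMESTAMP,severity:STRING,message:STRING'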
Question No:-79
You want your Google Kubernetes Engine cluster to automatically add or remove nodes based on CPU load.
What should you do?
1. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP Console.
2. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable autoscaling on the managed instance group for the cluster using the gcloud command.
3. Create a deployment and set the maxUnavailable and maxSurge properties. Enable the Cluster Autoscaler using the gcloud command.
4. Create a deployment and set the maxUnavailable and maxSurge properties. Enable autoscaling on the cluster managed instance group from the GCP Console.
Answer:-1. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP Console.
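The same configuration can also be scripted; the gcloud call below is equivalent to the console route named in the answer. Deployment, cluster, and threshold values are illustrative:

  # Horizontal Pod Autoscaler on the deployment
  kubectl autoscale deployment web --cpu-percent=60 --min=2 --max=20
  # Cluster Autoscaler on the node pool (GCP Console or gcloud)
  gcloud container clusters update my-cluster --zone=us-central1-a \
    --enable-autoscaling --node-pool=default-pool --min-nodes=1 --max-nodes=5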
Question No:-80
You need to develop procedures to verify the resilience of your disaster recovery setup for remote recovery using GCP. Your production environment is hosted on-premises. You need to establish a secure, redundant connection between your on-premises network and the GCP network.
What should you do?
1. Verify that Dedicated Interconnect can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if Dedicated Interconnect fails.
2. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails.
3. Verify that the Transfer Appliance can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if the Transfer Appliance fails.
4. Verify that the Transfer Appliance can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if the Transfer Appliance fails.
Answer:-2. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails.
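A sketch of the verification from the command line, assuming illustrative resource names; the describe output shows the operational status of the primary path, and the Cloud VPN gateway provides the redundant one:

  # Primary path: state of the Dedicated Interconnect and its VLAN attachment
  gcloud compute interconnects describe prod-interconnect
  gcloud compute interconnects attachments describe prod-attachment --region=us-central1
  # Redundant path: Cloud VPN gateway (tunnels to the on-premises peer follow)
  gcloud compute vpn-gateways create dr-vpn-gw --network=prod-vpc --region=us-central1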