

Google Professional Cloud Architect Exam Page 9 (Dumps)


Question No:-81

Your company captures all web traffic data in Google Analytics 360 and stores it in BigQuery. Each country has its own dataset. Each dataset has multiple tables.

You want analysts from each country to be able to see and query only the data for their respective countries.

How should you configure the access rights?


  1. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group.
  2. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate tables with view access with each respective analyst country-group.
  3. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate dataset with view access with each respective analyst country-group.
  4. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate table with view access with each respective analyst country-group.
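The pattern in option 1 combines a project-level jobUser grant (so analysts can run query jobs) with per-dataset view access (so each group sees only its country's data). A hedged sketch of the commands involved, assuming illustrative project, group, and dataset names:

```shell
# Grant the umbrella group permission to run query jobs (project-level IAM).
# Project and group names are placeholders.
gcloud projects add-iam-policy-binding my-analytics-project \
    --member="group:all_analysts@example.com" \
    --role="roles/bigquery.jobUser"

# Share one country's dataset with that country's analyst group:
# dump the dataset's access entries, append a READER entry, and update.
bq show --format=prettyjson my-analytics-project:country_de > dataset_acl.json
# (edit dataset_acl.json to add:
#   {"role": "READER", "groupByEmail": "de_analysts@example.com"} )
bq update --source dataset_acl.json my-analytics-project:country_de
```

Granting view access at the dataset level covers all tables inside it, which is why per-table sharing (options 2 and 4) adds maintenance overhead without improving isolation here.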

 






Question No:-82

Your BigQuery project has several users. For audit purposes, you need to see how many queries each user ran in the last month.
What should you do?


  1. Connect Google Data Studio to BigQuery. Create a dimension for the users and a metric for the number of queries per user.
  2. In the BigQuery interface, execute a query on the JOBS table to get the required information.
  3. Use 'bq show' to list all jobs. Per job, use 'bq ls' to list job information and get the required information.
  4. Use Cloud Audit Logging to view Cloud Audit Logs, and create a filter on the query operation to get the required information.
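The approach in option 2 can be expressed against BigQuery's `INFORMATION_SCHEMA` jobs views. A sketch, assuming the `region-us` qualifier matches where your jobs run:

```sql
-- Queries per user over the last 30 days (region qualifier is illustrative).
SELECT
  user_email,
  COUNT(*) AS query_count
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE job_type = 'QUERY'
  AND creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY user_email
ORDER BY query_count DESC;
```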

 





Question No:-83

You want to automate the creation of a managed instance group. The VMs have many OS package dependencies. You want to minimize the startup time for new VMs in the instance group.

What should you do?


  1. Use Terraform to create the managed instance group and a startup script to install the OS package dependencies.
  2. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the managed instance group with the VM image.
  3. Use Puppet to create the managed instance group and install the OS package dependencies.
  4. Use Deployment Manager to create the managed instance group and Ansible to install the OS package dependencies.
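Baking the OS package dependencies into a custom image (option 2) moves installation time out of the instance boot path. The same pattern can be sketched with gcloud, assuming illustrative resource names:

```shell
# Bake a custom image from a disk prepared with all dependencies.
gcloud compute images create app-base-image \
    --source-disk=prep-vm --source-disk-zone=us-central1-a

# Template the image, then create the managed instance group from it.
gcloud compute instance-templates create app-template \
    --image=app-base-image --image-project=my-project
gcloud compute instance-groups managed create app-mig \
    --template=app-template --size=3 --zone=us-central1-a
```

New VMs then boot directly from the pre-built image instead of running a lengthy startup script on every launch.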

 





Question No:-84

Your company operates nationally and plans to use GCP for multiple batch workloads, including some that are not time-critical. You also need to use GCP services that are HIPAA-certified and manage service costs.

How should you design to meet Google best practices?


  1. Provision preemptible VMs to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant.
  2. Provision preemptible VMs to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant.
  3. Provision standard VMs in the same region to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant.
  4. Provision standard VMs in the same region to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant.
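Preemptible VMs suit the non-time-critical batch workloads described, since they cost a fraction of standard VMs at the price of possible termination. A minimal sketch, with placeholder instance name and zone:

```shell
# Preemptible VMs are heavily discounted but can be reclaimed by GCP,
# which is acceptable for batch jobs that can be restarted.
gcloud compute instances create batch-worker-1 \
    --zone=us-central1-a \
    --preemptible
```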

 





Question No:-85

Your customer wants to do resilience testing of their authentication layer. This consists of a regional managed instance group serving a public REST API that reads from and writes to a Cloud SQL instance.

What should you do?


  1. Engage with a security company to run web scrapers that look for your users' authentication data on malicious websites and notify you if any is found.
  2. Deploy intrusion detection software to your virtual machines to detect and log unauthorized access.
  3. Schedule a disaster simulation exercise during which you can shut off all VMs in a zone to see how your application behaves.
  4. Configure a read replica for your Cloud SQL instance in a different zone than the master, and then manually trigger a failover while monitoring KPIs for your REST API.
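Resilience testing along the lines of option 4 can be exercised with gcloud, assuming the Cloud SQL instance is configured for high availability and the instance name below is a placeholder:

```shell
# Create a read replica in another zone (names/zones are illustrative).
gcloud sql instances create auth-db-replica \
    --master-instance-name=auth-db-primary \
    --zone=us-central1-b

# For an HA-configured primary, trigger a failover manually while
# watching latency and error-rate KPIs for the REST API.
gcloud sql instances failover auth-db-primary
```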

 





Question No:-86

You have an App Engine application that needs to be updated. You want to test the update with production traffic before replacing the current application version.

What should you do?


  1. Deploy the update using the Instance Group Updater to create a partial rollout, which allows for canary testing.
  2. Deploy the update as a new version in the App Engine application, and split traffic between the new and current versions.
  3. Deploy the update in a new VPC, and use Google's global HTTP load balancing to split traffic between the update and current applications.
  4. Deploy the update as a new App Engine application, and use Google's global HTTP load balancing to split traffic between the new and current applications.
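App Engine's built-in versioning and traffic splitting (option 2) makes this a two-command operation. A sketch with illustrative version names and split ratios:

```shell
# Deploy the update as a new version without shifting traffic to it.
gcloud app deploy --version=v2 --no-promote

# Send 10% of production traffic to the new version for canary testing
# (service name and split values are illustrative).
gcloud app services set-traffic default --splits=v1=0.9,v2=0.1
```

Once the new version looks healthy under real traffic, the split can be moved to `v2=1.0` to complete the rollout.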

 









Question No:-87

Your architecture calls for the centralized collection of all admin activity and VM system logs within your project.

How should you collect these logs from both VMs and services?


  1. All admin and VM system logs are automatically collected by Stackdriver.
  2. Stackdriver automatically collects admin activity logs for most services. The Stackdriver Logging agent must be installed on each instance to collect system logs.
  3. Launch a custom syslogd compute instance and configure your GCP project and VMs to forward all logs to it.
  4. Install the Stackdriver Logging agent on a single compute instance and let it collect all audit and access logs for your environment.
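Per option 2, admin activity audit logs are collected automatically, but VM system logs require the Logging agent on each instance. The documented install script for the legacy Stackdriver/Cloud Logging agent, run on the VM itself:

```shell
# Download and install the Logging agent on a Linux VM; each instance
# needs its own agent to forward syslog and other system logs.
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install
```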

 





Question No:-88

You have been engaged by your client to lead the migration of their application infrastructure to GCP. One of their current problems is that the on-premises high-performance SAN requires frequent and expensive upgrades to keep up with a variety of workloads, identified as follows:

  - 20 TB of log archives retained for legal reasons
  - 500 GB of VM boot/data volumes and templates
  - 500 GB of image thumbnails
  - 200 GB of customer session state data that allows customers to restart sessions even if offline for several days

Which of the following best reflects your recommendations for a cost-effective storage allocation?


  1. Local SSD for customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.
  2. Memcache backed by Cloud Datastore for the customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.
  3. Memcache backed by Cloud SQL for customer session state data. Assorted local SSD-backed instances for VM boot/data volumes. Cloud Storage for log archives and thumbnails.
  4. Memcache backed by Persistent Disk SSD storage for customer session state data. Assorted local SSD-backed instances for VM boot/data volumes. Cloud Storage for log archives and thumbnails.
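Lifecycle-managed Cloud Storage for the log archives is the cost-lever in option 2. A hedged example of such a policy, with a placeholder bucket name and age threshold:

```shell
# Transition archive objects to Coldline after a year to cut storage cost.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 365}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://log-archive-bucket
```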

 





Question No:-89

Your web application uses Google Kubernetes Engine to manage several workloads. One workload requires a consistent set of hostnames even after pod scaling and relaunches.

Which feature of Kubernetes should you use to accomplish this?


  1. StatefulSets
  2. Role-based access control
  3. Container environment variables
  4. Persistent Volumes
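StatefulSets (option 1) give each pod a stable, ordinal hostname that survives rescheduling. A minimal sketch, with illustrative names and image:

```yaml
# Pods get stable hostnames web-0, web-1, web-2 via the headless
# Service referenced in serviceName; names here are illustrative.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: nginx:1.25
```

Unlike a Deployment, scaling or relaunching pods preserves these identities, which is exactly the "consistent set of hostnames" the workload needs.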

 





Question No:-90

You are using Cloud CDN to deliver static HTTP(S) website content hosted on a Compute Engine instance group. You want to improve the cache hit ratio.

What should you do?


  1. Customize the cache keys to omit the protocol from the key.
  2. Shorten the expiration time of the cached objects.
  3. Make sure the HTTP(S) header "Cache-Region" points to the closest region of your users.
  4. Replicate the static content in a Cloud Storage bucket. Point Cloud CDN toward a load balancer on that bucket.
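Customizing the cache key to omit the protocol (option 1) lets HTTP and HTTPS requests for the same object share one cache entry, raising the hit ratio. A sketch, with a placeholder backend service name:

```shell
# Drop the protocol from Cloud CDN cache keys so http:// and https://
# requests hit the same cached entry.
gcloud compute backend-services update web-backend \
    --global \
    --no-cache-key-include-protocol
```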

 




@2014-2022 Crackyourinterview (All rights reserved)