Google Professional Cloud Architect Exam Page 6 (Dumps)
Question No:-51
You have found an error in your App Engine application caused by missing Cloud Datastore indexes. You have created a YAML file with the required indexes and want to deploy these new indexes to Cloud Datastore. What should you do?
1. Point gcloud datastore create-indexes to your configuration file
2. Upload the configuration file to App Engine's default Cloud Storage bucket, and have App Engine detect the new indexes
3. In the GCP Console, use Datastore Admin to delete the current indexes and upload the new configuration file
4. Create an HTTP request to the built-in python module to send the index configuration file to your application
Answer:-1. Point gcloud datastore create-indexes to your configuration file
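The winning option maps onto a one-line command. A minimal sketch, assuming the index definitions live in a file named index.yaml in the current directory (newer gcloud releases expose the same operation as gcloud datastore indexes create):

```shell
# Deploy the Cloud Datastore composite indexes defined in index.yaml
# (older gcloud releases named this "gcloud datastore create-indexes").
gcloud datastore indexes create index.yaml

# Optionally remove indexes that are no longer listed in the file.
gcloud datastore indexes cleanup index.yaml
```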
Question No:-52
You have an application that will run on Compute Engine. You need to design an architecture that takes into account a disaster recovery plan that requires your application to fail over to another region in case of a regional outage. What should you do?
1. Deploy the application on two Compute Engine instances in the same project but in a different region. Use the first instance to serve traffic, and use the HTTP load balancing service to fail over to the standby instance in case of a disaster.
2. Deploy the application on a Compute Engine instance. Use the instance to serve traffic, and use the HTTP load balancing service to fail over to an instance on your premises in case of a disaster.
3. Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.
4. Deploy the application on two Compute Engine instance groups, each in a separate project and a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.
Answer:-3. Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.
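A sketch of the two-region topology behind a single global HTTP(S) load balancer; all resource names, regions, and the instance template are assumptions, and health checks (which drive the failover) are omitted for brevity:

```shell
# Create a managed instance group in each of two regions (same project).
gcloud compute instance-groups managed create app-mig-primary \
    --region=us-central1 --template=app-template --size=2
gcloud compute instance-groups managed create app-mig-standby \
    --region=europe-west1 --template=app-template --size=2

# Attach both groups as backends of one global backend service; the load
# balancer shifts traffic to the standby region if the primary goes down.
gcloud compute backend-services add-backend app-backend --global \
    --instance-group=app-mig-primary --instance-group-region=us-central1
gcloud compute backend-services add-backend app-backend --global \
    --instance-group=app-mig-standby --instance-group-region=europe-west1
```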
Question No:-53
You are deploying an application on App Engine that needs to integrate with an on-premises database. For security purposes, your on-premises database must not be accessible through the public internet. What should you do?
1. Deploy your application on App Engine standard environment and use App Engine firewall rules to limit access to the open on-premises database.
2. Deploy your application on App Engine standard environment and use Cloud VPN to limit access to the on-premises database.
3. Deploy your application on App Engine flexible environment and use App Engine firewall rules to limit access to the on-premises database.
4. Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database.
Answer:-4. Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database.
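App Engine flexible runs inside your VPC network, so a Cloud VPN tunnel lets it reach the database privately. A rough Classic VPN sketch; the peer address, shared secret, and CIDR range are placeholders, and the required forwarding rules (ESP, UDP 500/4500) are omitted for brevity:

```shell
# Gateway and tunnel toward the on-premises VPN endpoint (placeholder values).
gcloud compute target-vpn-gateways create onprem-gw \
    --region=us-central1 --network=default
gcloud compute vpn-tunnels create onprem-tunnel \
    --region=us-central1 --target-vpn-gateway=onprem-gw \
    --peer-address=203.0.113.10 --shared-secret=SECRET --ike-version=2

# Route traffic for the on-premises range through the tunnel.
gcloud compute routes create route-to-onprem \
    --network=default --destination-range=10.0.0.0/8 \
    --next-hop-vpn-tunnel=onprem-tunnel \
    --next-hop-vpn-tunnel-region=us-central1
```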
Question No:-54
You are working in a highly secured environment where public Internet access from the Compute Engine VMs is not allowed. You do not yet have a VPN connection to access an on-premises file server. You need to install specific software on a Compute Engine instance. How should you install the software?
1. Upload the required installation files to Cloud Storage. Configure the VM on a subnet with Private Google Access enabled. Assign only an internal IP address to the VM. Download the installation files to the VM using gsutil.
2. Upload the required installation files to Cloud Storage and use firewall rules to block all traffic except the IP address range for Cloud Storage. Download the files to the VM using gsutil.
3. Upload the required installation files to Cloud Source Repositories. Configure the VM on a subnet with Private Google Access enabled. Assign only an internal IP address to the VM. Download the installation files to the VM using gcloud.
4. Upload the required installation files to Cloud Source Repositories and use firewall rules to block all traffic except the IP address range for Cloud Source Repositories. Download the files to the VM using gsutil.
Answer:-1. Upload the required installation files to Cloud Storage. Configure the VM on a subnet with Private Google Access enabled. Assign only an internal IP address to the VM. Download the installation files to the VM using gsutil.
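The full flow can be sketched as below; bucket, subnet, and VM names are assumptions:

```shell
# From a machine that does have internet access: stage the installer.
gsutil cp ./installer.deb gs://my-staging-bucket/

# Enable Private Google Access on the subnet so internal-only VMs can
# still reach Google APIs such as Cloud Storage.
gcloud compute networks subnets update my-subnet \
    --region=us-central1 --enable-private-ip-google-access

# Create the VM with no external IP address.
gcloud compute instances create build-vm \
    --zone=us-central1-a --subnet=my-subnet --no-address

# Then, on the VM itself (internal IP only):
gsutil cp gs://my-staging-bucket/installer.deb /tmp/
```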
Question No:-55
Your company is moving 75 TB of data into Google Cloud. You want to use Cloud Storage and follow Google-recommended practices. What should you do?
1. Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage.
2. Move your data onto a Transfer Appliance. Use Cloud Dataprep to decrypt the data into Cloud Storage.
3. Install gsutil on each server that contains data. Use resumable transfers to upload the data into Cloud Storage.
4. Install gsutil on each server containing data. Use streaming transfers to upload the data into Cloud Storage.
Answer:-1. Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage.
Question No:-56
You have an application deployed on Google Kubernetes Engine using a Deployment named echo-deployment. The deployment is exposed using a Service called echo-service. You need to perform an update to the application with minimal downtime to the application. What should you do?
1. Use kubectl set image deployment/echo-deployment
2. Use the rolling update functionality of the Instance Group behind the Kubernetes cluster
3. Update the deployment yaml file with the new container image. Use kubectl delete deployment/echo-deployment and kubectl create -f
4. Update the service yaml file with the new container image. Use kubectl delete service/echo-service and kubectl create -f
Answer:-1. Use kubectl set image deployment/echo-deployment
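kubectl set image triggers the Deployment's built-in rolling update, replacing Pods gradually so the Service keeps serving traffic. A sketch; the container name "echo" and the image tag are assumptions:

```shell
# Roll out a new image with zero planned downtime.
kubectl set image deployment/echo-deployment echo=gcr.io/my-project/echo:v2

# Watch the rollout progress; roll back only if it misbehaves.
kubectl rollout status deployment/echo-deployment
kubectl rollout undo deployment/echo-deployment   # only if needed
```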
Question No:-57
Your company is using BigQuery as its enterprise data warehouse. Data is distributed over several Google Cloud projects. All queries on BigQuery need to be billed on a single project. You want to make sure that no query costs are incurred on the projects that contain the data. Users should be able to query the datasets, but not edit them.
How should you configure users' access roles?
1. Add all users to a group. Grant the group the role of BigQuery user on the billing project and BigQuery dataViewer on the projects that contain the data.
2. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery user on the projects that contain the data.
3. Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data.
4. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery jobUser on the projects that contain the data.
Answer:-3. Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data.
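jobUser lets members run (and be billed for) query jobs in the billing project without granting any data access there; dataViewer lets them read, but not edit, the datasets in the data projects. A sketch with hypothetical project IDs and group:

```shell
# Billing project: members may run query jobs here (costs accrue here).
gcloud projects add-iam-policy-binding billing-project \
    --member=group:analysts@example.com --role=roles/bigquery.jobUser

# Data project(s): read-only access to the datasets, no job execution.
gcloud projects add-iam-policy-binding data-project-1 \
    --member=group:analysts@example.com --role=roles/bigquery.dataViewer
```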
Question No:-58
You have developed an application using Cloud ML Engine that recognizes famous paintings from uploaded images. You want to test the application and allow specific people to upload images for the next 24 hours. Not all users have a Google Account. How should you have users upload images?
1. Have users upload the images to Cloud Storage. Protect the bucket with a password that expires after 24 hours.
2. Have users upload the images to Cloud Storage using a signed URL that expires after 24 hours.
3. Create an App Engine web application where users can upload images. Configure App Engine to disable the application after 24 hours. Authenticate users via Cloud Identity.
4. Create an App Engine web application where users can upload images for the next 24 hours. Authenticate users via Cloud Identity.
Answer:-2. Have users upload the images to Cloud Storage using a signed URL that expires after 24 hours.
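A signed URL carries its own time-limited credential, so holders need no Google Account. A sketch; the service-account key file, bucket, and object name are assumptions:

```shell
# Generate a URL that permits a PUT upload of one object for 24 hours.
gsutil signurl -m PUT -d 24h sa-key.json \
    gs://painting-uploads/user-image.jpg
```

Anyone given the printed URL can upload via a plain HTTP PUT until it expires.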
Question No:-59
Your web application must comply with the requirements of the European Union's General Data Protection Regulation (GDPR). You are responsible for the technical architecture of your web application. What should you do?
1. Ensure that your web application only uses native features and services of Google Cloud Platform, because Google already has various certifications and provides "pass-on" compliance when you use native features.
2. Enable the relevant GDPR compliance setting within the GCP Console for each of the services in use within your application.
3. Ensure that Cloud Security Scanner is part of your test planning strategy in order to pick up any compliance gaps.
4. Define a design for the security of data in your web application that meets GDPR requirements.
Answer:-4. Define a design for the security of data in your web application that meets GDPR requirements.
Question No:-60
You need to set up Microsoft SQL Server on GCP. Management requires that there's no downtime in case of a data center outage in any of the zones within a GCP region. What should you do?
1. Configure a Cloud SQL instance with high availability enabled.
2. Configure a Cloud Spanner instance with a regional instance configuration.
3. Set up SQL Server on Compute Engine, using Always On Availability Groups with Windows Failover Clustering. Place nodes in different subnets.
4. Set up SQL Server Always On Availability Groups with Windows Failover Clustering. Place nodes in different zones.
Answer:-4. Set up SQL Server Always On Availability Groups with Windows Failover Clustering. Place nodes in different zones.
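The two cluster nodes can be created from Google's SQL Server images in separate zones of one region; names, machine type, and image family below are assumptions:

```shell
# Node 1 and node 2 in different zones of us-central1.
gcloud compute instances create sql-node-1 --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --image-family=sql-ent-2017-win-2016 --image-project=windows-sql-cloud
gcloud compute instances create sql-node-2 --zone=us-central1-b \
    --machine-type=n1-standard-8 \
    --image-family=sql-ent-2017-win-2016 --image-project=windows-sql-cloud
# Windows Server Failover Clustering and the Always On Availability Group
# are then configured inside Windows on the two nodes.
```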