
Ensure Access & Identity in Google Cloud: Challenge Lab [GSP342]



STEPS

Task 1:


gcloud config set compute/zone us-east1-b

nano role-definition.yaml



*paste the role definition given below into the file*


title: "orca_storage_update"

description: "Permissions"

stage: "ALPHA"

description: "Permissions" stage: "ALPHA" includedPermissions: - storage.buckets.get - storage.objects.get - storage.objects.list - storage.objects.update - storage.objects.create

 

To save the .yaml file in nano, press Ctrl+X, then Y, then Enter.

 

gcloud iam service-accounts create orca-private-cluster-sa --display-name "Orca Private Cluster Service Account"
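
(Optional check, not part of the lab instructions: you can confirm the service account exists before moving on.)

gcloud iam service-accounts describe orca-private-cluster-sa@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com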

 

gcloud iam roles create orca_storage_update --project $DEVSHELL_PROJECT_ID --file role-definition.yaml
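
(Optional: to verify the custom role was created with the expected permissions, you can describe it.)

gcloud iam roles describe orca_storage_update --project $DEVSHELL_PROJECT_ID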


Task 2 and 3:

 

gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:orca-private-cluster-sa@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role roles/monitoring.viewer

gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:orca-private-cluster-sa@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role roles/monitoring.metricWriter

gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:orca-private-cluster-sa@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role roles/logging.logWriter

gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:orca-private-cluster-sa@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role projects/$DEVSHELL_PROJECT_ID/roles/orca_storage_update
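
(Optional: to confirm that all four role bindings landed on the service account, you can filter the project IAM policy. The filter/format flags below are just one way to do it.)

gcloud projects get-iam-policy $DEVSHELL_PROJECT_ID --flatten="bindings[].members" --filter="bindings.members:orca-private-cluster-sa" --format="table(bindings.role)"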

 

Task 4: Create and configure a new Kubernetes Engine private cluster

 

gcloud container clusters create orca-test-cluster --num-nodes 1 --master-ipv4-cidr=172.16.0.64/28 --network orca-build-vpc --subnetwork orca-build-subnet --enable-master-authorized-networks  --master-authorized-networks 192.168.10.2/32 --enable-ip-alias --enable-private-nodes --enable-private-endpoint --service-account orca-private-cluster-sa@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --zone us-east1-b
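
(Optional: once the cluster is up, you can confirm it was created with private nodes. The field below assumes the standard privateClusterConfig output of the describe command.)

gcloud container clusters describe orca-test-cluster --zone us-east1-b --format="value(privateClusterConfig.enablePrivateNodes)"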

 

Task 5: Deploy an application to a private Kubernetes Engine cluster.

1. Navigate to Compute Engine in the Cloud Console.

2. Click on the SSH button for the orca-jumphost instance.


In the SSH window, connect to the private cluster by running the following:

 

gcloud config set compute/zone us-east1-b

 

gcloud container clusters get-credentials orca-test-cluster --internal-ip
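
(Optional sanity check: listing the nodes confirms kubectl is talking to the private cluster through its internal endpoint.)

kubectl get nodes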

 

kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0


kubectl expose deployment hello-server --name orca-hello-service --type LoadBalancer --port 80 --target-port 8080
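
(Optional: check that the deployment is running and that the service has been assigned an external IP. The LoadBalancer IP can take a minute or two to appear.)

kubectl get deployment hello-server

kubectl get service orca-hello-service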


Congratulations, you have completed the Challenge Lab!


SAKURA SATIO

