Cloud Storage → Buckets → Create bucket → name = your Cloud Storage Bucket Name. For Access control select Fine-grained → press Create
BigQuery → select your project → Create dataset → name = your BigQuery Dataset Name
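If you prefer Cloud Shell, the same setup works from the command line. A minimal sketch, where YOUR_BUCKET and YOUR_DATASET are placeholders for the names from your lab panel:
gsutil mb -b off gs://YOUR_BUCKET   # -b off keeps fine-grained (non-uniform) access control
bq mk ${GOOGLE_CLOUD_PROJECT}:YOUR_DATASET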
open cloud shell
gsutil cp gs://cloud-training/gsp323/lab.csv .
cat lab.csv
gsutil cp gs://cloud-training/gsp323/lab.schema .
cat lab.schema
copy the BigQuery schema printed by the previous command and keep it handy
go to BigQuery
open the dataset you just created
Create table
Create table from = Google Cloud Storage
Select file from GCS bucket = cloud-training/gsp323/lab.csv
Table name = customers
under Schema, enable Edit as text
paste the BigQuery schema you copied earlier
press Create
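Alternatively, the table can be loaded with a single bq command from Cloud Shell. A sketch, assuming YOUR_DATASET is the dataset created earlier and lab.schema is the file downloaded above:
bq load --source_format=CSV YOUR_DATASET.customers \
  gs://cloud-training/gsp323/lab.csv lab.schema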
Go to dataproc
clusters
create cluster → Cluster on Compute Engine → Create
Location → Region
leave the rest at their default values and press Create
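The console steps above collapse into one gcloud command if you prefer. A sketch, assuming a cluster named my-cluster and the Region from your lab panel:
gcloud dataproc clusters create my-cluster --region=REGION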
click on the cluster name
go to vm instances
open SSH on the VM whose name matches the cluster and whose role is "Master" (by default the master node is <cluster-name>-m)
run this command in the SSH session:
hdfs dfs -cp gs://cloud-training/gsp323/data.txt /data.txt
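If you would rather not open a browser SSH session, the same copy can be run from Cloud Shell. A sketch, assuming the cluster is named my-cluster (so its master node is my-cluster-m by default) and ZONE is its zone:
gcloud compute ssh my-cluster-m --zone=ZONE \
  --command="hdfs dfs -cp gs://cloud-training/gsp323/data.txt /data.txt"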
Go to dataflow
Go to jobs
create job
job name = lab-transform
region = Region
dataflow template = Text Files on Cloud Storage to BigQuery [BATCH]
JavaScript UDF path in Cloud Storage = gs://cloud-training/gsp323/lab.js
JSON path = gs://cloud-training/gsp323/lab.schema
JavaScript UDF name = transform
BigQuery output table = your BigQuery Output Table (unique for everyone; copy it from the lab panel)
Cloud Storage input path = gs://cloud-training/gsp323/lab.csv
Temporary BigQuery directory = gs://<Cloud Storage Bucket Name>/bigquery_temp
Temporary location = gs://<Cloud Storage Bucket Name>/temp
RUN JOB
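For reference, the same job can be launched from Cloud Shell using the Google-provided template. A sketch, with YOUR_OUTPUT_TABLE and YOUR_BUCKET as placeholders for your lab values (parameter names follow the public GCS_Text_to_BigQuery template):
gcloud dataflow jobs run lab-transform --region=REGION \
  --gcs-location=gs://dataflow-templates/latest/GCS_Text_to_BigQuery \
  --parameters=javascriptTextTransformGcsPath=gs://cloud-training/gsp323/lab.js,JSONPath=gs://cloud-training/gsp323/lab.schema,javascriptTextTransformFunctionName=transform,outputTable=YOUR_OUTPUT_TABLE,inputFilePattern=gs://cloud-training/gsp323/lab.csv,bigQueryLoadingTemporaryDirectory=gs://YOUR_BUCKET/bigquery_temp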
TASK 1 IS NOW COMPLETE
Go to dataproc
jobs
submit job
region = Region
Job type = Spark
Main class or jar = org.apache.spark.examples.SparkPageRank
Arguments = /data.txt
Jar files = file:///usr/lib/spark/examples/jars/spark-examples.jar
SUBMIT
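The equivalent Cloud Shell command, assuming the cluster created earlier is named my-cluster:
gcloud dataproc jobs submit spark --cluster=my-cluster --region=REGION \
  --class=org.apache.spark.examples.SparkPageRank \
  --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
  -- /data.txt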
TASK 2 IS NOW COMPLETE
Go to Dataprep
accept the terms and conditions and continue
Create a blank flow
Connect to your data
import datasets
GCS
in the search box, type gs://cloud-training/gsp323/runs.csv
select runs.csv
Import & Add to Flow
edit recipe
right-click column9 → Filter rows → On column values → Contains
Patterns to match = /(^0$|^0\.0$)/ (matches values that are exactly 0 or 0.0)
set the action to Delete matching rows and add the step
column 2 rename = runid
column 3 rename = userid
column 4 rename = labid
column 5 rename = lab_title
column 6 rename = start
column 7 rename = end
column 8 rename = time
column 9 rename = score
column 10 rename = state
Run Job
on the publishing page that opens, click Run Job again
TASK 3 IS NOW COMPLETE
Go to cloud shell
gcloud iam service-accounts create my-natlang-sa \
  --display-name "my natural language service account"

gcloud iam service-accounts keys create ~/key.json \
  --iam-account my-natlang-sa@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS="/home/$USER/key.json"
gcloud auth activate-service-account my-natlang-sa@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com --key-file=$GOOGLE_APPLICATION_CREDENTIALS
gcloud ml language analyze-entities --content="Old Norse texts portray Odin as one-eyed and long-bearded, frequently wielding a spear named Gungnir and wearing a cloak and a broad hat." > result.json
gcloud auth login (optional)
copy the authorization code from the link provided (optional)
gsutil cp result.json <CNL LINK>
if the command above returns a permission error: go to IAM
search for the principal PROJECT_ID@PROJECT_ID.iam.gserviceaccount.com
add the role Storage Object Admin
re-run the gsutil cp command; it should now succeed
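Equivalently, the role can be granted from Cloud Shell. A sketch, assuming the principal from the step above is the one that needs access:
gcloud projects add-iam-policy-binding ${GOOGLE_CLOUD_PROJECT} \
  --member="serviceAccount:${GOOGLE_CLOUD_PROJECT}@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"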
nano request.json
👇👇👇👇👇👇👇👇👇
{ "config": { "encoding":"FLAC", "languageCode": "en-US" }, "audio": { "uri":"gs://cloud-training/gsp323/task4.flac" } }
curl -s -X POST -H "Content-Type: application/json" --data-binary @request.json \
  "https://speech.googleapis.com/v1/speech:recognize?key=${API_KEY}" > result.json
gsutil cp result.json <GCS LINK>
gcloud iam service-accounts create quickstart
gcloud iam service-accounts keys create key.json --iam-account quickstart@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com
gcloud auth activate-service-account --key-file key.json
export ACCESS_TOKEN=$(gcloud auth print-access-token)
nano request.json
👇👇👇👇👇👇👇👇👇
{ "inputUri":"gs://spls/gsp154/video/chicago.mp4", "features": [ "TEXT_DETECTION" ] }
curl -s -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  'https://videointelligence.googleapis.com/v1/videos:annotate' \
  -d @request.json
curl -s -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  'https://videointelligence.googleapis.com/v1/operations/OPERATION_FROM_PREVIOUS_REQUEST' > result1.json
(replace OPERATION_FROM_PREVIOUS_REQUEST with the operation name returned by the annotate call above)
gsutil cp result1.json <GVI LINK>
Congratulations, you have completed the Challenge Lab!