
Explore Machine Learning Models with Explainable AI: Challenge Lab [GSP324]


Link to the video: https://youtu.be/gIJmNrIwR4k


STEPS

Open "Notebooks" under AI Platform.

Click "New Instance".


Select "TensorFlow Enterprise 2.3" > Without GPUs.


Region: us-central1 (Iowa)

Zone: us-central1-a


Click "Create".


Open JupyterLab.


Open a Terminal.


git clone https://github.com/GoogleCloudPlatform/training-data-analyst


Navigate to the folder training-data-analyst/quests/dei.


Open the notebook file what-if-tool-challenge.ipynb.


Run the initial cells to download and import the hmda_2017_ny_all-records_labels dataset.


*To run a cell, either click the Run button or press Shift+Enter inside it.*

[*] - this marker next to a cell means it is still running



# Imports used below (already present earlier in the notebook):
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers

# input_size is the size of the array you'll be feeding into the model for
# each example; the notebook derives it from the training data, e.g.
# input_size = len(train_data.iloc[0])

model = Sequential()
model.add(layers.Dense(200, input_shape=(input_size,), activation='relu'))
model.add(layers.Dense(50, activation='relu'))
model.add(layers.Dense(20, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
model.fit(train_data, train_labels, epochs=10, batch_size=2048, validation_split=0.1)


# Train your second model on the limited dataset. Use `limited_train_data`
# for your data and `limited_train_labels` for your labels.

limited_model = Sequential()
limited_model.add(layers.Dense(200, input_shape=(input_size,), activation='relu'))
limited_model.add(layers.Dense(50, activation='relu'))
limited_model.add(layers.Dense(20, activation='relu'))
limited_model.add(layers.Dense(1, activation='sigmoid'))
limited_model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
limited_model.fit(limited_train_data, limited_train_labels, epochs=10, batch_size=2048, validation_split=0.1)




Go to Cloud Storage, create a bucket, and name it after your GCP Project ID.
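If you prefer the command line, the bucket can also be created from the notebook's terminal or Cloud Shell. This is a minimal sketch assuming your default project is already set; the region matches the lab's us-central1 instances.

# Create a bucket named after the current project, in us-central1
gsutil mb -l us-central1 gs://<PUT_PROJECT_ID>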





# Fill out this information (the project ID must be a quoted string):

GCP_PROJECT = '<PUT_PROJECT_ID>'
MODEL_BUCKET = 'gs://<PUT_PROJECT_ID>'
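After running the notebook's save-and-deploy cells, it is worth confirming the exported SavedModel directories actually landed in the bucket before creating the AI Platform models. The exact paths depend on what the notebook wrote, so simply listing the bucket is enough.

# List the bucket contents to confirm the exported models are there
gsutil ls gs://<PUT_PROJECT_ID>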




Create the Complete AI Platform model

Use the following to create your first AI Platform model:


Model Name = complete_model

Version Name = v1

Python version = 3.7

Framework = TensorFlow

Framework version = 2.3.1

ML Runtime version = 2.3
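The same model and version can also be created with gcloud from the terminal. This is a sketch only: the --origin path below is an assumption, so point it at the GCS directory your notebook actually exported the complete model to.

# Create the model resource, then a v1 version from the exported SavedModel
# (the --origin path is an assumed export location -- adjust to yours)
gcloud ai-platform models create complete_model --regions us-central1
gcloud ai-platform versions create v1 \
  --model complete_model \
  --framework tensorflow \
  --python-version 3.7 \
  --runtime-version 2.3 \
  --origin gs://<PUT_PROJECT_ID>/saved_complete_model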


Create the Limited AI Platform model

Use the following to create your second AI Platform model:


Model Name = limited_model

Version Name = v1

Python version = 3.7

Framework = TensorFlow

Framework version = 2.3.1

ML Runtime version = 2.3
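The limited model follows the same pattern as the sketch above; again, the --origin path is an assumption to replace with your notebook's actual export location.

# Create the limited model and its v1 version
# (the --origin path is an assumed export location -- adjust to yours)
gcloud ai-platform models create limited_model --regions us-central1
gcloud ai-platform versions create v1 \
  --model limited_model \
  --framework tensorflow \
  --python-version 3.7 \
  --runtime-version 2.3 \
  --origin gs://<PUT_PROJECT_ID>/saved_limited_model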


Congratulations, you have completed the lab!

SAKURA SATIO


