diff --git a/LICENSE b/LICENSE index 88724da..2135d1b 100644 --- a/LICENSE +++ b/LICENSE @@ -1,6 +1,6 @@ MIT License -Copyright (c) 2023 Hiiruki +Copyright (c) 2023 Lemniskett Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal diff --git a/README.md b/README.md index 94cef99..2c25210 100644 --- a/README.md +++ b/README.md @@ -1,46 +1,8 @@ -# hiiruki.dev +# lemniskett.dev _yet another personal website._ -[![Netlify Status](https://api.netlify.com/api/v1/badges/73730c94-7f54-47c9-bd39-054054829340/deploy-status)](https://app.netlify.com/sites/hiiruki/deploys "Netlify Status") - -This is my personal website. It's built with [Hugo](https://gohugo.io/) and hosted on [Netlify](https://www.netlify.com/) and using [Kamigo](https://github.com/hiiruki/hugo-Kamigo) theme. You can visit [here](https://hiiruki.dev). - -![light mode](.github/images/light_mode.webp#center "Light mode") -![dark mode](.github/images/dark_mode.webp#center "Dark mode") - -## Pagespeed Insights - -[Google Pagespeed Insights](https://pagespeed.web.dev/analysis/https-hiiruki-dev/rqaiq47qyp?form_factor=mobile) score for this website. - -#### Mobile - -![mobile](.github/images/mobile.webp#center "Mobile") - -#### Desktop - -![desktop](.github/images/desktop.webp#center "Desktop") - -## Flow - -```mermaid -graph TD - -subgraph GitHub Repo - A[Website Code] --> B[Commit Changes] - B --> C[Push to Repo] -end - -subgraph Netlify CI/CD Pipeline - C --> D[Trigger CI/CD from main branch] - D --> E[Build with Hugo] - E --> F[Deploy to Netlify] -end - -subgraph Netlify Hosting - F --> G[Live Website] -end -``` +Forked from [Hiiruki's Personal Website](https://github.com/hiiruki/hiiruki.dev) ## License diff --git a/content/about.md b/content/about.md index 2acc681..275b2d4 100644 --- a/content/about.md +++ b/content/about.md @@ -1,128 +1,31 @@ --- title: About -description: $ cat /home/about +description: Self-explanatory hidemeta: true --- -> "Information is power. But like all power, there are those who want to keep it for themselves." -— [Aaron Swartz](https://en.wikipedia.org/wiki/Aaron_Swartz "Aaron Swartz @ Wikipedia") +I'm Syahrial Agni Prasetya, A Linux enthusiast with a deep passion for DevOps culture, Cloud, and Automation. -
- $ whoami -Hi! I'm echo 'RmlybWFuCg==' | base64 --decode 👋 -

+I have a good background in Linux and other UNIX/UNIX-like operating systems and have worked with Docker, Kubernetes, and cloud providers such as AWS, Azure, and several OpenStack-based providers. -Just an ordinary person who loves tech, games, anime, music, and other cool stuff. When I’m not on the text editor/terminal, I enjoy playing video games, watching movies or anime, and listening to music. +Developing and maintaining application infrastructure is part of my daily routine. In my free time, I love tinkering with my home lab and trying out new tools to improve the infrastructure I manage. -
-Interests: -
Cyber security, GNU/Linux, *nix based systems, open source, -FOSS, privacy, OPSEC, DFIR, OSINT, CTF, threat intelligence, -reverse engineering, malware, cryptography, hardware hacking, -physical security, lockpicking sport, cloud computing, DevOps, -SysAdmin, SWE, SRE, operating systems, tildeverse, fediverse, -bioinformatics, biohacking, data mining, Jamstack, SSG, IoT, -blockchain, HPC, audiophile, mechanical keyboard, AI, ML, DL, -LLM, ACG (Anime, Comics, and Games), Extended Reality (XR), -3D design, ham radio, game development, science, cyberpunk, -cipherpunk, psychology, philosophy, minimalism, retrocomputing, -permacomputing, etc.
-

- -I started this blog to jot down things I've learned, mainly because I tend to forget stuff I picked up earlier. But hey, I've made it public, so you're welcome to give it a read and pick up things too. Sharing is caring, after all! ^^ - -
+Here you can find things I have learned that have made my life easier. Feel free to contact me about any of them. ### Contacts: -💬 [Matrix](https://matrix.to/#/@hiiruki:matrix.org "@hiiruki:matrix.org")
-💬 [Session](https://getsession.org/) - [Session ID](/session.txt "Session ID: 055b210e9f97217abf1872ed98af29640d9f5194847352975a6e9a3ea301683602")
-💬 [XMPP](https://en.wikipedia.org/wiki/XMPP "XMPP @ Wikipedia") - [hiiruki@yourdata.forsale](xmpp:hiiruki@yourdata.forsale) +[Telegram](https://lemniskett.space/users/lemniskett) -📡 [IRC](https://en.wikipedia.org/wiki/Internet_Relay_Chat "IRC @ Wikipedia") - hiiruki @ [Libera.Chat](https://libera.chat/)
-📡 [IRC](https://en.wikipedia.org/wiki/Internet_Relay_Chat "IRC @ Wikipedia") - hiiruki @ [Rizon](https://www.rizon.net/)
-📡 [IRC](https://en.wikipedia.org/wiki/Internet_Relay_Chat "IRC @ Wikipedia") - hiiruki @ [tilde.chat](https://tilde.chat/)
+[Pleroma](https://lemniskett.space/users/lemniskett) -📧 [E-mail](mailto:h%69@hiiruki.dev) +[E-mail](mailto:syahrial@lemniskett.dev) + +>All my emails are digitally signed with PGP key: [4325F99CF01AB846](/pgp.txt). Do not trust emails from me that lack a valid digital signature.
- 🔑 PGP Public Key +Importing my public key ```shell -curl -sL https://hiiruki.dev/pgp | gpg --import - -# Fingerprint: [0xAF5886C8] • AEA5 B927 D7F0 D40B F4B3 C9F1 E40D 7521 AF58 86C8 +curl -sL https://lemniskett.dev/pgp.txt | gpg --import ``` - -[pgp.txt](/pgp.txt)
- -
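A note on the key-import snippet above: once the published key is in the local keyring, a signed message can be checked with standard gpg commands. A minimal sketch, using the key ID and URL quoted above; the file name `signed-mail.asc` is a placeholder:

```shell
# Import the published public key (same command as above)
curl -sL https://lemniskett.dev/pgp.txt | gpg --import

# Double-check the fingerprint of the imported key before trusting it
gpg --fingerprint 4325F99CF01AB846

# Verify a clearsigned or ASCII-armored signed message saved to a file
gpg --verify signed-mail.asc
```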
- 🔑 SSH Public Key - -```shell -curl -sL https://hiiruki.dev/ssh | tee -a ~/.ssh/authorized_keys - -# Fingerprint: SHA256:uxJNkKzML7tBYwYdjzviimi/Nw4Nd8ghFpl2MOrYLnw -``` - -[ssh.txt](/ssh.txt) -
- -
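The removed SSH-key section above pipes a downloaded key straight into `~/.ssh/authorized_keys`. A more cautious variant, sketched here with the same URL and assuming a reasonably recent OpenSSH (`ssh-keygen -lf -` reads the key from standard input), prints the fingerprint first so it can be compared against the published value:

```shell
# Show the fingerprint of the downloaded key without installing it
curl -sL https://hiiruki.dev/ssh | ssh-keygen -lf -

# Append the key only after the fingerprint matches
curl -sL https://hiiruki.dev/ssh >> ~/.ssh/authorized_keys
```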
- 🔑 OMEMO Fingerprint - -``` -F1085BD5 D359788F 05F936D8 3185A5BE -75B227FE DE4E6909 9433113B DFE4D722 -``` - -
- -
- 🔑 OTR Fingerprint - -``` -147B3144 705DADC6 E30F10D4 58EE07ED C9BFE1A6 -``` - -
- -
- -### Misc: - -👨‍💻 [humans.txt](/humans.txt) -
- -
-🎵 Now listening -

-Current Spotify Song -

-
-
- -
-👨‍💻 Doing something -

- Discord Presence -

-
- - diff --git a/content/blog/centos-qradar-integration/images/step1-2.webp b/content/blog/centos-qradar-integration/images/step1-2.webp deleted file mode 100644 index a777579..0000000 Binary files a/content/blog/centos-qradar-integration/images/step1-2.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step1.webp b/content/blog/centos-qradar-integration/images/step1.webp deleted file mode 100644 index f5a52e4..0000000 Binary files a/content/blog/centos-qradar-integration/images/step1.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step10-2.webp b/content/blog/centos-qradar-integration/images/step10-2.webp deleted file mode 100644 index afbf768..0000000 Binary files a/content/blog/centos-qradar-integration/images/step10-2.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step10.webp b/content/blog/centos-qradar-integration/images/step10.webp deleted file mode 100644 index 543ee5e..0000000 Binary files a/content/blog/centos-qradar-integration/images/step10.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step11-2.webp b/content/blog/centos-qradar-integration/images/step11-2.webp deleted file mode 100644 index a6daedf..0000000 Binary files a/content/blog/centos-qradar-integration/images/step11-2.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step11-3.webp b/content/blog/centos-qradar-integration/images/step11-3.webp deleted file mode 100644 index af1af52..0000000 Binary files a/content/blog/centos-qradar-integration/images/step11-3.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step11-4.webp b/content/blog/centos-qradar-integration/images/step11-4.webp deleted file mode 100644 index 62ba6fe..0000000 Binary files a/content/blog/centos-qradar-integration/images/step11-4.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step11-5.webp b/content/blog/centos-qradar-integration/images/step11-5.webp deleted file mode 100644 index c1f98e8..0000000 Binary files a/content/blog/centos-qradar-integration/images/step11-5.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step11.webp b/content/blog/centos-qradar-integration/images/step11.webp deleted file mode 100644 index cf768ff..0000000 Binary files a/content/blog/centos-qradar-integration/images/step11.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step12-2.webp b/content/blog/centos-qradar-integration/images/step12-2.webp deleted file mode 100644 index 39fa7d2..0000000 Binary files a/content/blog/centos-qradar-integration/images/step12-2.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step12-3.webp b/content/blog/centos-qradar-integration/images/step12-3.webp deleted file mode 100644 index f78c789..0000000 Binary files a/content/blog/centos-qradar-integration/images/step12-3.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step12.webp b/content/blog/centos-qradar-integration/images/step12.webp deleted file mode 100644 index 314d1d9..0000000 Binary files a/content/blog/centos-qradar-integration/images/step12.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step13-2.webp b/content/blog/centos-qradar-integration/images/step13-2.webp deleted file mode 100644 index 8a8d855..0000000 Binary files 
a/content/blog/centos-qradar-integration/images/step13-2.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step13-3.webp b/content/blog/centos-qradar-integration/images/step13-3.webp deleted file mode 100644 index 10c3284..0000000 Binary files a/content/blog/centos-qradar-integration/images/step13-3.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step13.webp b/content/blog/centos-qradar-integration/images/step13.webp deleted file mode 100644 index e1640d0..0000000 Binary files a/content/blog/centos-qradar-integration/images/step13.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step14-2.webp b/content/blog/centos-qradar-integration/images/step14-2.webp deleted file mode 100644 index 1ef3199..0000000 Binary files a/content/blog/centos-qradar-integration/images/step14-2.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step14.webp b/content/blog/centos-qradar-integration/images/step14.webp deleted file mode 100644 index 08d1173..0000000 Binary files a/content/blog/centos-qradar-integration/images/step14.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step15-2.webp b/content/blog/centos-qradar-integration/images/step15-2.webp deleted file mode 100644 index b6904ba..0000000 Binary files a/content/blog/centos-qradar-integration/images/step15-2.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step15-3.webp b/content/blog/centos-qradar-integration/images/step15-3.webp deleted file mode 100644 index f16b102..0000000 Binary files a/content/blog/centos-qradar-integration/images/step15-3.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step15.webp b/content/blog/centos-qradar-integration/images/step15.webp deleted file mode 100644 index 4c2d557..0000000 Binary files a/content/blog/centos-qradar-integration/images/step15.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step16-2.webp b/content/blog/centos-qradar-integration/images/step16-2.webp deleted file mode 100644 index 1185c08..0000000 Binary files a/content/blog/centos-qradar-integration/images/step16-2.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step16-3.webp b/content/blog/centos-qradar-integration/images/step16-3.webp deleted file mode 100644 index 0a6b6fc..0000000 Binary files a/content/blog/centos-qradar-integration/images/step16-3.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step16-4.webp b/content/blog/centos-qradar-integration/images/step16-4.webp deleted file mode 100644 index a859639..0000000 Binary files a/content/blog/centos-qradar-integration/images/step16-4.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step16-5.webp b/content/blog/centos-qradar-integration/images/step16-5.webp deleted file mode 100644 index e639c2b..0000000 Binary files a/content/blog/centos-qradar-integration/images/step16-5.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step16.webp b/content/blog/centos-qradar-integration/images/step16.webp deleted file mode 100644 index 5d2eeb3..0000000 Binary files a/content/blog/centos-qradar-integration/images/step16.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step2.webp b/content/blog/centos-qradar-integration/images/step2.webp deleted file mode 
100644 index 208b134..0000000 Binary files a/content/blog/centos-qradar-integration/images/step2.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step3.webp b/content/blog/centos-qradar-integration/images/step3.webp deleted file mode 100644 index 8f04e5e..0000000 Binary files a/content/blog/centos-qradar-integration/images/step3.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step4.webp b/content/blog/centos-qradar-integration/images/step4.webp deleted file mode 100644 index a7fb78b..0000000 Binary files a/content/blog/centos-qradar-integration/images/step4.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step5-2.webp b/content/blog/centos-qradar-integration/images/step5-2.webp deleted file mode 100644 index 7647422..0000000 Binary files a/content/blog/centos-qradar-integration/images/step5-2.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step5-3.webp b/content/blog/centos-qradar-integration/images/step5-3.webp deleted file mode 100644 index f65c515..0000000 Binary files a/content/blog/centos-qradar-integration/images/step5-3.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step5-4.webp b/content/blog/centos-qradar-integration/images/step5-4.webp deleted file mode 100644 index 1778973..0000000 Binary files a/content/blog/centos-qradar-integration/images/step5-4.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step5.webp b/content/blog/centos-qradar-integration/images/step5.webp deleted file mode 100644 index d9a8fb4..0000000 Binary files a/content/blog/centos-qradar-integration/images/step5.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step6.webp b/content/blog/centos-qradar-integration/images/step6.webp deleted file mode 100644 index 4a06c59..0000000 Binary files a/content/blog/centos-qradar-integration/images/step6.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step7.webp b/content/blog/centos-qradar-integration/images/step7.webp deleted file mode 100644 index 5e09f38..0000000 Binary files a/content/blog/centos-qradar-integration/images/step7.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step8-2.webp b/content/blog/centos-qradar-integration/images/step8-2.webp deleted file mode 100644 index 5bde647..0000000 Binary files a/content/blog/centos-qradar-integration/images/step8-2.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step8-3.webp b/content/blog/centos-qradar-integration/images/step8-3.webp deleted file mode 100644 index d22d388..0000000 Binary files a/content/blog/centos-qradar-integration/images/step8-3.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step8-4.webp b/content/blog/centos-qradar-integration/images/step8-4.webp deleted file mode 100644 index d542566..0000000 Binary files a/content/blog/centos-qradar-integration/images/step8-4.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step8-5.webp b/content/blog/centos-qradar-integration/images/step8-5.webp deleted file mode 100644 index 6a194bd..0000000 Binary files a/content/blog/centos-qradar-integration/images/step8-5.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step8-6.webp b/content/blog/centos-qradar-integration/images/step8-6.webp deleted file 
mode 100644 index f65881e..0000000 Binary files a/content/blog/centos-qradar-integration/images/step8-6.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step8-7.webp b/content/blog/centos-qradar-integration/images/step8-7.webp deleted file mode 100644 index 990c1ca..0000000 Binary files a/content/blog/centos-qradar-integration/images/step8-7.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step8-8.webp b/content/blog/centos-qradar-integration/images/step8-8.webp deleted file mode 100644 index 6802f74..0000000 Binary files a/content/blog/centos-qradar-integration/images/step8-8.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step8.webp b/content/blog/centos-qradar-integration/images/step8.webp deleted file mode 100644 index a345489..0000000 Binary files a/content/blog/centos-qradar-integration/images/step8.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step9-2.webp b/content/blog/centos-qradar-integration/images/step9-2.webp deleted file mode 100644 index 5232f46..0000000 Binary files a/content/blog/centos-qradar-integration/images/step9-2.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step9-3.webp b/content/blog/centos-qradar-integration/images/step9-3.webp deleted file mode 100644 index 45a095a..0000000 Binary files a/content/blog/centos-qradar-integration/images/step9-3.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/images/step9.webp b/content/blog/centos-qradar-integration/images/step9.webp deleted file mode 100644 index 66f6d74..0000000 Binary files a/content/blog/centos-qradar-integration/images/step9.webp and /dev/null differ diff --git a/content/blog/centos-qradar-integration/index.md b/content/blog/centos-qradar-integration/index.md deleted file mode 100644 index cbe1cf7..0000000 --- a/content/blog/centos-qradar-integration/index.md +++ /dev/null @@ -1,323 +0,0 @@ ---- -title: "Setup CentOS for IBM QRadar CE Integration with VMware Workstation" -description: "" -summary: "This a guide to setup CentOS for IBM QRadar CE Integration with VMware Workstation and send logs to QRadar CE." -date: 2023-09-12T16:15:51+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["centos", "qradar", "siem", "vmware", "linux", "security", "tutorial"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -# weight: 1 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - relative: false # when using page bundles set this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/tree/main/content/blog/centos-qradar-integration/" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -## Overview - -This is a guide to setup CentOS for IBM QRadar CE Integration with VMware Workstation and send logs to QRadar CE. - -CentOS in this setup will act as a client that will be monitored by QRadar CE. 
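Before the step-by-step part of the removed post: the integration it builds boils down to auditd generating events, the audisp syslog plugin writing them to LOG_LOCAL6, and rsyslog forwarding everything to the QRadar CE VM over UDP port 514. A condensed sketch of what steps 10-14 below configure, with the QRadar IP taken from the post's example network (the forwarding rule shown in step 13 appears to be missing its host):

```shell
# On the CentOS client (summary of steps 10-14 below)
yum install audit                      # auditd generates the audit events
# /etc/audisp/plugins.d/syslog.conf:   active = yes, args = LOG_LOCAL6
# /etc/rsyslog.conf, last line:        *.* @192.168.211.129:514
#                                      (single @ = UDP; @@ would be TCP)
service auditd restart
systemctl restart rsyslog
```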
- -## Prerequisites - -- [VMware Workstation Pro](https://www.vmware.com/products/workstation-pro/workstation-pro-evaluation.html) or [VMware Workstation Player](https://www.vmware.com/products/workstation-player/workstation-player-evaluation.html) -- [QRadar CE ISO](https://www.ibm.com/community/qradar/ce/) - -## Setup - -> **Note:** Before you start, make sure your **QRadar CE VM** is **already running**. - -### 1. Open VMware Workstation and click Open a Virtual Machine - -![Open a Virtual Machine](./images/step1.webp#center "Open a Virtual Machine") - -or you can click **File > Open...** or use the shortcut `Ctrl + O` - -![Open a Virtual Machine](./images/step1-2.webp#center "Open a Virtual Machine") - -### 2. Select the QRadar CE ISO file and click Open - -![Select the QRadar CE ISO file](./images/step2.webp#center "Select the QRadar CE ISO file") - -### 3. Name the VM and select the location to save the VM, then click Import - -![Name the VM and select the location to save the VM](./images/step3.webp#center "Name the VM and select the location to save the VM") - -### 4. Wait for the import to complete then click Edit virtual machine settings - -![Wait for the import to complete](./images/step4.webp#center "Wait for the import to complete") - -### 5. Change the virtual machine settings as needed - -In my setup, I changed the following settings: - -- Memory: 512 MB -- Processors: 1 -- Network Adapter: NAT - -> **Note:** We don't need that much memory and processors for this setup, because we will only use it as a dummy server/client. You can change the settings later if you need more memory and processors. - -Change the memory from **6 GB** to **512 MB** (or as needed) - -![memory](./images/step5.webp#center "memory") - -Change the processors from **2** to **1** (or as needed) - -![processors](./images/step5-2.webp#center "processors") - -Change the network adapter from **Bridged** to **NAT**, then click **OK** - -![network adapter](./images/step5-3.webp#center "network adapter") - -So the final settings will be like this: - -![final settings](./images/step5-4.webp#center "final settings") - -### 6. Power on the VM - -![Power on the VM](./images/step6.webp#center "Power on the VM") - -### 7. Wait for the VM to boot up and login with the root user and create a new password - -> **Note:** Don't forget the password that you created, because you will need it later. - -![login with root user](./images/step7.webp#center "login with root user") - -### 8. 
Configure the network - -Type `nmtui` to open the Network Manager Text User Interface - -![nmtui](./images/step8.webp#center "nmtui") - -- Select **Set system hostname** and press **Enter** - -![set system hostname](./images/step8-2.webp#center "set system hostname") - -- Set the hostname, in my setup I set it to `centos` and press **Enter** - -![set hostname](./images/step8-3.webp#center "set hostname") - -- Select **OK** and press **Enter** - -![select OK](./images/step8-4.webp#center "select OK") - -- Select **Quit** and press **Enter** - -![select Quit](./images/step8-5.webp#center "select Quit") - -- type `clear` to clear the screen - -- type `bash` to refresh the bash shell, so the hostname will be updated - -![refresh bash shell](./images/step8-6.webp#center "refresh bash shell") - -- Check the connection by typing `ping google.com` and press **Enter** - -![ping google.com](./images/step8-7.webp#center "ping google.com") - -- Check the IP address by typing `ip -br addr` and press **Enter** - -> **Note:** Take note of the IP address, because you will need it later. - -![ip -br addr](./images/step8-8.webp#center "ip -br addr") - -In my case, the IP address is `192.168.211.128` - -### 9. SSH to the VM centos - -You can use [PuTTY](https://www.putty.org/), [Windows Terminal](https://www.microsoft.com/en-us/p/windows-terminal/9n0dx20hk701?activetab=pivot:overviewtab), [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/install-win10), [MobaXterm](https://mobaxterm.mobatek.net/) or any other [SSH](https://en.wikipedia.org/wiki/Secure_Shell "SSH @ Wikipedia") client you want. - -In my case, I use [Termius](https://termius.com/). - -- Set the details as needed - -![set the details](./images/step9.webp#center "set the details") - -- Type `ssh root@` and press **Enter** -- Type password that you created earlier and press **Enter** -- In Termius you can connect to the VM using **Quick Connect** feature, so you don't need to type the IP address and password every time you want to connect to the VM. - -![ssh root@](./images/step9-2.webp#center "ssh root@") - -- Voila! You are now connected to the VM - -![connected to the VM](./images/step9-3.webp#center "connected to the VM") - -### 10. Install the required packages and dependencies - -- Type `yum install audit` and press **Enter** - -![yum install audit](./images/step10.webp#center "yum install audit") - -- Type `y` if prompted and press **Enter** - -![y](./images/step10-2.webp#center "y") - -### 11. Configure the auditd service - -- Start the auditd service by typing `service start auditd` and press **Enter** -- If you get a warning, just type `systemctl daemon-reload` and press **Enter** -- Type `service start auditd` and press **Enter** again - -![service start auditd](./images/step11.webp#center "service start auditd") - -- Type `chkconfig auditd on` and press **Enter** to enable the auditd service - -![chkconfig auditd on](./images/step11-2.webp#center "chkconfig auditd on") - -- Type `service auditd status` and press **Enter** to check the status of the auditd service - -![service auditd status](./images/step11-3.webp#center "service auditd status") - -- If you encounter an error like this: - -> The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl. 
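One correction worth noting for the auditd commands in the removed post around this point: `service start auditd` has the arguments reversed; the LSB syntax is `service <name> <action>`, which is likely why the post ends up falling back to systemctl. A sketch of the intended commands on CentOS 7, assuming the audit package from step 10 is installed:

```shell
# LSB-style syntax: the service name comes before the action
service auditd start
service auditd status

# systemd-native equivalents, which the post falls back to
systemctl daemon-reload
systemctl enable auditd      # persistent across reboots, like 'chkconfig auditd on'
systemctl start auditd
systemctl status auditd
```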
- -![service auditd status error](./images/step11-4.webp#center "service auditd status error") - -- Just type `systemctl start auditd` and press **Enter** to start the auditd service. - -![systemctl start auditd](./images/step11-5.webp#center "systemctl start auditd") - -### 12. Configure the audit rules - -- Type `vi /etc/audisp/plugins.d/syslog.conf` and press **Enter** to edit the syslog.conf file - -![vi /etc/audisp/plugins.d/syslog.conf](./images/step12.webp#center "vi /etc/audisp/plugins.d/syslog.conf") - -![vi /etc/audisp/plugins.d/syslog.conf](./images/step12-2.webp#center "vi /etc/audisp/plugins.d/syslog.conf") - -- Press `i` to enter the insert mode -- Change the content of the `syslog.conf` file to this: - - active = yes - - direction = out - - path = builtin_syslog - - type = builtin - - args = LOG_LOCAL6 - - format = string - -- So the final content of the `syslog.conf` file will be like this: - -![syslog.conf](./images/step12-3.webp#center "syslog.conf") - -- Press `Esc` to exit the insert mode -- Type `:wq` and press **Enter** to save and exit the file - -### 13. Configure the rsyslog service - -- Type `vi /etc/rsyslog.conf` and press **Enter** to edit the rsyslog.conf file - -![vi /etc/rsyslog.conf](./images/step13.webp#center "vi /etc/rsyslog.conf") - -- Press `shift + G` to go to the end of the file -- Press `O` to enter the insert mode and add this line at the end of the file: - - `*.* @:514` -- Check the IP address of the QRadar CE VM, in my case the IP address is `192.168.211.129` - -![rsyslog.conf](./images/step13-2.webp#center "rsyslog.conf") - -- Like this: - -![rsyslog.conf](./images/step13-3.webp#center "rsyslog.conf") - -- Press `Esc` to exit the insert mode -- Type `:wq` and press **Enter** to save and exit the file - -### 14. Restart the auditd and rsyslog services - -- Type `service auditd restart` and press **Enter** to restart the auditd service - -![service auditd restart](./images/step14.webp#center "service auditd restart") - -- Type `systemctl restart rsyslog` and press **Enter** to restart the rsyslog service - -![systemctl restart rsyslog](./images/step14-2.webp#center "systemctl restart rsyslog") - -### 15. Open the QRadar CE Dashboard on your browser and add a filter - -- Open your browser and go to `https://` -- Login with the username `admin` and your password -- Click **Log Activity** and click **Add Filter** - -![Log Activity](./images/step15.webp#center "Log Activity") - -- Add a filter with the following details: - - Parameter: `Source IP [Indexed]` - - Operator: `Equals` - - Value: ``, in my case the IP address is `192.168.211.128` - -![Add Filter](./images/step15-2.webp#center "Add Filter") - -- Change the View to **Real Time (streaming)** - -![Change the View](./images/step15-3.webp#center "Change the View") - -### 16. 
Test the log with add user in the centos VM - -- Type `useradd test` and press **Enter** to add a new user - -![useradd test](./images/step16.webp#center "useradd test") - -- If you get **Unknown log event**, you can restart the auditd and rsyslog services again -- Type `service auditd restart` and press **Enter** to restart the auditd service -- Type `systemctl restart rsyslog` and press **Enter** to restart the rsyslog service -- Type `useradd test` and press **Enter** again to add a new user -- Now you can see the activity log in the QRadar CE Dashboard -- You can also see the log in the `/var/log/audit/audit.log` file in the centos VM - -![useradd test](./images/step16-2.webp#center "useradd test") - -- Test deleting the user by typing `userdel test` and press **Enter** - -![userdel test](./images/step16-3.webp#center "userdel test") - -- Now you can see the activity log in the QRadar CE Dashboard, notice that the **Event Name** is contains user deletion activity. - -![userdel test](./images/step16-4.webp#center "userdel test") - -- You can try with other activities like `usermod`, `userpasswd`, `usergroup`, login and logout, change some configuration, etc. - -![other activity](./images/step16-5.webp#center "other activity") - -### 17. Voila! You have successfully setup CentOS for IBM QRadar CE Integration with VMware Workstation - -You can now explore the QRadar CE Dashboard and see the logs from your CentOS VM. - -## References - -- https://www.ibm.com/community/qradar/ce/ -- https://www.ibm.com/docs/en/SS42VS_7.4/pdf/b_siem_inst.pdf -- https://www.ibm.com/docs/en/SS42VS_7.4/pdf/b_qradar_system_notifications.pdf -- https://www.ibm.com/community/qradar/wp-content/uploads/sites/5/2020/03/QRadar_CE_Under_the_Radar_21Feb.pdf -- https://www.ibm.com/docs/en/qradar-on-cloud?topic=support-common-problems -- https://www.ibm.com/docs/en/qsip -- http://ftpmirror.your.org/pub/misc/ftp.software.ibm.com/software/security/products/qradar/documents/7.2.4/QLM/EN/b_qradar_system_notifications.pdf -- https://www.reddit.com/r/QRadar/comments/p5lfzz/best_strategy_for_monitor_linux_servers/ -- [Forwarding Syslogs from Linux Hosts to QRadar](https://wiki.secure-iss.com/Public/SOC/LinuxLogForwarding) -- [Sending Linux logs to QRadar (rsyslog.conf) by Jose Bravo](https://youtu.be/Dmf2iwRqATI?si=Ctf9DJd9CHVp4sHk) -- [Mastering Linux OS Integration with IBM QRadar: A Comprehensive Guide to Supercharge Your Security” by Ahmad Hassan Tariq](https://medium.com/@AhmadCyberZone.com/mastering-linux-os-integration-with-ibm-qradar-a-comprehensive-guide-to-supercharge-your-security-9d1be9eab9c9) -- Guide/learning material from [Infinite Learning HCAI Program](https://kampusmerdeka.kemdikbud.go.id/program/studi-independen/browse/863c3409-8b4e-4c96-9edd-71ee61e9fc41/7a22d773-4ea0-11ed-a45a-c2cca2f5088a) (I can't share the material/content directly, because it's confidential and belong to [Infinite Learning](https://www.infinitelearning.id/) and IBM Academy) \ No newline at end of file diff --git a/content/blog/hello-world/images/flow.svg b/content/blog/hello-world/images/flow.svg deleted file mode 100644 index 1ad5703..0000000 --- a/content/blog/hello-world/images/flow.svg +++ /dev/null @@ -1 +0,0 @@ -
[flow.svg text labels: GitHub Repo (Website Code → Commit Changes → Push to Repo) → Netlify CI/CD Pipeline (Trigger CI/CD from main branch → Build with Hugo → Deploy to Netlify) → Netlify Hosting (Live Website)]
\ No newline at end of file diff --git a/content/blog/hello-world/images/hello-world.gif b/content/blog/hello-world/images/hello-world.gif deleted file mode 100644 index 483462a..0000000 Binary files a/content/blog/hello-world/images/hello-world.gif and /dev/null differ diff --git a/content/blog/hello-world/images/sailor-saturn.webp b/content/blog/hello-world/images/sailor-saturn.webp deleted file mode 100644 index 1681080..0000000 Binary files a/content/blog/hello-world/images/sailor-saturn.webp and /dev/null differ diff --git a/content/blog/hello-world/index.md b/content/blog/hello-world/index.md index 454aad2..af09a7d 100644 --- a/content/blog/hello-world/index.md +++ b/content/blog/hello-world/index.md @@ -2,10 +2,10 @@ title: "Hello World!" description: "Yet another blog." summary: "Yet another blog." -date: 2023-09-03T21:48:44+07:00 +date: 2023-10-09T13:36:12+07:00 draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["random", "misc", "hello-world", "SSG"] +author: "Lemniskett" # ["Me", "You"] # multiple authors +tags: ["hello-world"] canonicalURL: "" showToc: true TocOpen: false @@ -36,52 +36,6 @@ cover: # appendFilePath: true # to append file path to Edit link --- -![Hello World!](images/hello-world.gif#center "Hello World in terminal") +Hello World! -Yeah, my another blog ~~again~~ (¬_¬) - -Previously I had a blog that used Static Site Generator (SSG) which is [Eleventy](https://11ty.dev), but now I have moved to other SSGs and what I'm using now is [Hugo](https://gohugo.io/). - -## Tech Stack - -- [Hugo](https://gohugo.io/) for the Static Site Generator (SSG) -- [Netlify](https://netlify.com) to host this site and for the CI/CD pipeline -- [GitHub](https://github.com) to host the source code - -## Flow - -![Flow](images/flow.svg#center "Flow") - -## Why SSG? - -I'm using SSG because it's easier to use and it's faster than using CMS (Content Management System) like [WordPress](https://wordpress.com/). I don't need to worry about the server, database, etc. I just need to write the content and the SSG will generate the static site for me. - -Static site generators offer several advantages that make them a compelling choice: - -- ***Efficiency***: SSGs pre-generate web pages, eliminating the need for server-side processing. This results in faster load times and reduced server resource consumption. -- ***Security***: Since there's no dynamic server-side code execution, the attack surface is smaller, making your website less vulnerable to security threats. -- ***Scalability***: Static sites can handle high levels of traffic without performance issues, making them suitable for projects of all sizes. -- ***Version Control***: Content and code can be easily managed with version control systems like Git, enabling collaborative development and content updates. -- ***Cost-Effectiveness***: Hosting static sites is often less expensive than dynamic sites because you don't need robust server infrastructure or database management. -- ***Simplicity***: SSGs encourage a straightforward development process. Content is created and organized in plain text files (e.g., Markdown), and the generator takes care of rendering them into HTML. -- ***Portability***: You can host static sites on a variety of platforms, making it easy to switch hosting providers or migrate your site. -- ***Maintainability***: Easy to maintain regarding software updates. -- ***Transparency***: Transparent in what is going on under the hood. Especially the open-source SSGs. - -## Why Hugo? 
- -I'm using Hugo because it's fast, simple, and easy to use. It's also written in Go, making it cross-platform. I'm avoiding the use of Node.js because it's bloated and slow. Additionally, some individuals have [security concerns related to JavaScript](https://yewtu.be/watch?v=pid5kmWXSj8), so I'm minimizing its usage as much as possible. This site also functions properly even when JavaScript is disabled. - -## Why Netlify? - -I'm using Netlify because it's free, easy to use, and it has a CI/CD pipeline. I'm using the free plan because I don't need the paid plan yet. I'm also using Netlify because it's easy to set up and it's easy to connect to GitHub. - -## Why Blogging? - -I started this blog to jot down things I've learned, mainly because I tend to forget stuff I picked up earlier. But hey, I've made it public, so you're welcome to give it a read and pick up things too. Sharing is caring, after all! ^^ - -Sorry if there are any mistakes in the blog/articles/writeups, you can [contact](/about/#contacts) me if you have any questions. - -Anyway, welcome to my blog and happy reading! ^^ - -![Thank You!](images/sailor-saturn.webp#center 'Hotaru "Sailor Saturn, Guardian of Silence" Tomoe from Sailor Moon') +This website is forked from [Hiiruki's Personal Website](https://github.com/hiiruki/hiiruki.dev) \ No newline at end of file diff --git a/content/blog/hugo-link-render-hook/index.md b/content/blog/hugo-link-render-hook/index.md deleted file mode 100644 index 3038a86..0000000 --- a/content/blog/hugo-link-render-hook/index.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -title: "Hugo Open External Link in New Tab and Add Rel Attribute" -description: "How to add a render hook for link in Hugo" -summary: "How to add a render hook for link in Hugo" -date: 2023-09-10T19:38:50+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["hugo", "render-hook", "goldmark"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -# weight: 1 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -UseHugoToc: false -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - relative: false # when using page bundles set this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/tree/main/content/blog" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -Hugo is using [goldmark](https://github.com/yuin/goldmark/) as its markdown renderer and has a [render hook](https://gohugo.io/templates/render-hooks/) feature. - -Previously, Hugo uses [Blackfriday](https://github.com/russross/blackfriday) as its markdown renderer in version below `v0.60.0`. Check the [changelog](https://github.com/gohugoio/hugo/releases/tag/v0.60.0) for more information. 
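In the removed render-hook post below, the markup inside both fenced code blocks appears to have been stripped in transit: only `{{ .Text | safeHTML }}` survives in Method 1, and the body of Method 2's block is gone entirely. As a rough, generic sketch of what a Method 1 `render-link.html` typically looks like (an assumption, not the author's original code):

```html
<!-- layouts/_default/_markup/render-link.html (generic sketch, not the original) -->
<a href="{{ .Destination | safeURL }}"
  {{- with .Title }} title="{{ . }}"{{ end -}}
  {{- if strings.HasPrefix .Destination "http" }} target="_blank" rel="noopener noreferrer"{{ end -}}>
  {{- .Text | safeHTML -}}
</a>
```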
- -### Method 1 (No JavaScript) - -Make a file `layouts/_default/_markup/render-link.html` and add the following code: - -```html - - {{ .Text | safeHTML }} - -``` - -### Method 2 (JavaScript) - -Make a file `layouts/partials/extend_head.html` and add the following code: - -```html - -``` - -## References - -- https://gohugo.io/templates/render-hooks/ -- https://discourse.gohugo.io/t/open-external-links-in-new-tab-window/34000?page=2 -- https://agrimprasad.com/post/hugo-goldmark-markdown/ -- https://www.petanikode.com/hugo-render-hooks/ diff --git a/content/blog/port-forwarding-ngrok/images/cover.webp b/content/blog/port-forwarding-ngrok/images/cover.webp deleted file mode 100644 index 64df455..0000000 Binary files a/content/blog/port-forwarding-ngrok/images/cover.webp and /dev/null differ diff --git a/content/blog/port-forwarding-ngrok/images/step1-2.webp b/content/blog/port-forwarding-ngrok/images/step1-2.webp deleted file mode 100644 index 73a6072..0000000 Binary files a/content/blog/port-forwarding-ngrok/images/step1-2.webp and /dev/null differ diff --git a/content/blog/port-forwarding-ngrok/images/step1.webp b/content/blog/port-forwarding-ngrok/images/step1.webp deleted file mode 100644 index a4d4e57..0000000 Binary files a/content/blog/port-forwarding-ngrok/images/step1.webp and /dev/null differ diff --git a/content/blog/port-forwarding-ngrok/images/step2-2.webp b/content/blog/port-forwarding-ngrok/images/step2-2.webp deleted file mode 100644 index d0ebdc6..0000000 Binary files a/content/blog/port-forwarding-ngrok/images/step2-2.webp and /dev/null differ diff --git a/content/blog/port-forwarding-ngrok/images/step2.webp b/content/blog/port-forwarding-ngrok/images/step2.webp deleted file mode 100644 index f894d62..0000000 Binary files a/content/blog/port-forwarding-ngrok/images/step2.webp and /dev/null differ diff --git a/content/blog/port-forwarding-ngrok/images/step3.webp b/content/blog/port-forwarding-ngrok/images/step3.webp deleted file mode 100644 index 499cde5..0000000 Binary files a/content/blog/port-forwarding-ngrok/images/step3.webp and /dev/null differ diff --git a/content/blog/port-forwarding-ngrok/images/step4.webp b/content/blog/port-forwarding-ngrok/images/step4.webp deleted file mode 100644 index 265e6c2..0000000 Binary files a/content/blog/port-forwarding-ngrok/images/step4.webp and /dev/null differ diff --git a/content/blog/port-forwarding-ngrok/images/step5.webp b/content/blog/port-forwarding-ngrok/images/step5.webp deleted file mode 100644 index 73594e5..0000000 Binary files a/content/blog/port-forwarding-ngrok/images/step5.webp and /dev/null differ diff --git a/content/blog/port-forwarding-ngrok/images/step6.webp b/content/blog/port-forwarding-ngrok/images/step6.webp deleted file mode 100644 index 7a5db16..0000000 Binary files a/content/blog/port-forwarding-ngrok/images/step6.webp and /dev/null differ diff --git a/content/blog/port-forwarding-ngrok/index.md b/content/blog/port-forwarding-ngrok/index.md deleted file mode 100644 index 0088b7d..0000000 --- a/content/blog/port-forwarding-ngrok/index.md +++ /dev/null @@ -1,131 +0,0 @@ ---- -title: "Port Forwarding with ngrok" -description: "Make your local server accessible from the internet" -summary: "Make your local server accessible from the internet" -date: 2023-09-15T07:37:05+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["ngrok", "port-forwarding", "linux", "ssh", "tutorial", "server", "tcp"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -# 
weight: 1 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "images/cover.webp" # image path/url - alt: "" # alt text - caption: "ngrok illustration | https://ngrok.com/" # display caption under cover - relative: false # when using page bundles set this to true - hidden: false # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/tree/main/content/blog/port-forwarding-ngrok/index.md" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -## Introduction - -[Port forwarding](https://en.wikipedia.org/wiki/Port_forwarding "Port forwarding @ Wikipedia") is a technique that allows external devices to access a device that is behind a firewall, NAT, or private network. It is commonly used to make a local server accessible from the internet. - -[ngrok](https://ngrok.com/ "ngrok") is a tool that creates a secure tunnel to your local server. It is free to use, but you can also buy a paid plan to get more features. ngrok is available for Windows, macOS, Linux, Docker, FreeBSD, etc. - - -## Steps - -### 1. Download ngrok - -Download [ngrok](https://ngrok.com/download "Download ngrok") from the official website. - -You can also use `wget` to download ngrok directly to your server. This is useful if you want to use ngrok on a server that does not have a GUI. - -> **Note:** Install `wget` if it is not installed on your server. For Debian/Ubuntu, you can install it with `sudo apt install wget`. For CentOS/RHEL, you can install it with `sudo yum install wget`. - -{{< figure src="./images/step1.webp" caption="Install `wget` on CentOS" align="center" alt="Install wget on CentOS" >}} - -Download ngrok with this command: - -```bash -wget https://bin.equinox.io/c/bNyj1mQVY4c/ngrok-v3-stable-linux-amd64.tgz --no-check-certificate -``` - -`--no-check-certificate` is used to bypass the SSL certificate check. This is useful if you are using a self-signed certificate. - -{{< figure src="./images/step1-2.webp" caption="ngrok download" align="center" alt="ngrok download" >}} - -### 2. Extract ngrok - -Extract it to a directory of your choice. I will use `/usr/local/bin` in this example. - -```bash -tar -xzf ngrok-v3-stable-linux-amd64.tgz -C /usr/local/bin -``` - -![ngrok extract](./images/step2.webp#center "ngrok extract") - -That command will extract the `ngrok` binary to `/usr/local/bin`. You can check if it is installed correctly by running `ngrok --version` - -![ngrok version](./images/step2-2.webp#center "ngrok version") - -### 3. Create an account - -Create an account on [ngrok](https://dashboard.ngrok.com/signup "Sign up for ngrok") and get your auth token from the [dashboard](https://dashboard.ngrok.com/get-started/your-authtoken "Your authtoken @ ngrok dashboard"). - -![ngrok dashboard](./images/step3.webp#center "ngrok auth token") - -### 4. Connect your account - -Connect your account by running `ngrok authtoken `. Replace `` with your auth token. - -or - -`ngrok config add-authtoken ` - -![ngrok connect](./images/step4.webp#center "ngrok connect") - -### 5. Start ngrok - -In this example, I want to make my local SSH server accessible from the internet. So, I will use port 22 for this example. - -Run `ngrok tcp 22` to start ngrok. 
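Gathering the ngrok commands that the removed post spreads across its steps and conclusion (the authtoken value is a placeholder):

```shell
# One-time setup: attach the account's authtoken
ngrok config add-authtoken <YOUR_AUTHTOKEN>

# Expose a local SSH server over a TCP tunnel (this post's example)
ngrok tcp 22

# Or expose a local web server over HTTP, as the conclusion mentions
ngrok http 80
```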
- -![ngrok start](./images/step5.webp#center "ngrok start") - -### 6. Connect to your server - -Connect to your server with the ngrok URL. - -Domain: **0.tcp.ap.ngrok.io**
-Port: **11507** - -So the full command will be `ssh username@0.tcp.ap.ngrok.io -p 11507` - -{{< figure src="./images/step6.webp" caption="Remote SSH the **CentOS 7** using **Ubuntu 22.04.2 LTS (WSL)** with ngrok" align="center" alt="Install wget on CentOS" >}} - -> **Note:** The ngrok URL will change every time you start ngrok. So, you need to update the URL every time you start ngrok. - -## Conclusion - -That's it! Now you can make your local server accessible from the internet with ngrok. You can also use ngrok to make your local website accessible from the internet. Just use the right tunnel type for your server. - -For example, if you want to make your local website accessible from the internet, you can use `ngrok http 80` to start ngrok. Then you can access your website with the ngrok URL. You can also use ngrok to make your local SSH server accessible from the internet. Just use `ngrok tcp 22` to start ngrok. Then you can connect to your server with the ngrok URL. - -Further reading: [ngrok Tunnels](https://ngrok.com/docs/secure-tunnels/tunnels/ "ngrok Tunnels") - -## References - -- [ngrok documentation](https://ngrok.com/docs "ngrok Documentation") -- [Port forwarding - Wikipedia](https://en.wikipedia.org/wiki/Port_forwarding "Port forwarding - Wikipedia") -- [ngrok Tunnels](https://ngrok.com/docs/secure-tunnels/tunnels/ "ngrok Tunnels") diff --git a/content/blog/qradar-setup-vmware/images/step1.webp b/content/blog/qradar-setup-vmware/images/step1.webp deleted file mode 100644 index c595030..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step1.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step10.webp b/content/blog/qradar-setup-vmware/images/step10.webp deleted file mode 100644 index e9fffba..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step10.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step11-2.webp b/content/blog/qradar-setup-vmware/images/step11-2.webp deleted file mode 100644 index 6c735b4..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step11-2.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step11-3.webp b/content/blog/qradar-setup-vmware/images/step11-3.webp deleted file mode 100644 index 2a5d71c..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step11-3.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step11-4.webp b/content/blog/qradar-setup-vmware/images/step11-4.webp deleted file mode 100644 index 940f404..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step11-4.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step11-5.webp b/content/blog/qradar-setup-vmware/images/step11-5.webp deleted file mode 100644 index 1887c66..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step11-5.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step11.webp b/content/blog/qradar-setup-vmware/images/step11.webp deleted file mode 100644 index c99d1cc..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step11.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step12-2.webp b/content/blog/qradar-setup-vmware/images/step12-2.webp deleted file mode 100644 index d0605aa..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step12-2.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step12.webp b/content/blog/qradar-setup-vmware/images/step12.webp 
deleted file mode 100644 index ff81672..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step12.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step13-2.webp b/content/blog/qradar-setup-vmware/images/step13-2.webp deleted file mode 100644 index 62d905e..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step13-2.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step13-3.webp b/content/blog/qradar-setup-vmware/images/step13-3.webp deleted file mode 100644 index b824930..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step13-3.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step13.webp b/content/blog/qradar-setup-vmware/images/step13.webp deleted file mode 100644 index b617ecf..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step13.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step14.webp b/content/blog/qradar-setup-vmware/images/step14.webp deleted file mode 100644 index 6e180c4..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step14.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step15.webp b/content/blog/qradar-setup-vmware/images/step15.webp deleted file mode 100644 index 3a07a17..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step15.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step16-2.webp b/content/blog/qradar-setup-vmware/images/step16-2.webp deleted file mode 100644 index 01e65fc..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step16-2.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step16-3.webp b/content/blog/qradar-setup-vmware/images/step16-3.webp deleted file mode 100644 index 364cd07..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step16-3.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step16-4.webp b/content/blog/qradar-setup-vmware/images/step16-4.webp deleted file mode 100644 index 3d8ab0c..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step16-4.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step16.webp b/content/blog/qradar-setup-vmware/images/step16.webp deleted file mode 100644 index 60ebe24..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step16.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step17-2.webp b/content/blog/qradar-setup-vmware/images/step17-2.webp deleted file mode 100644 index 9c46177..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step17-2.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step17-3.webp b/content/blog/qradar-setup-vmware/images/step17-3.webp deleted file mode 100644 index 651af28..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step17-3.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step17.webp b/content/blog/qradar-setup-vmware/images/step17.webp deleted file mode 100644 index 640426d..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step17.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step18.webp b/content/blog/qradar-setup-vmware/images/step18.webp deleted file mode 100644 index 3f999f8..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step18.webp and /dev/null differ diff --git 
a/content/blog/qradar-setup-vmware/images/step19.webp b/content/blog/qradar-setup-vmware/images/step19.webp deleted file mode 100644 index 9a3c9b0..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step19.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step2.webp b/content/blog/qradar-setup-vmware/images/step2.webp deleted file mode 100644 index a777579..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step2.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step20-2.webp b/content/blog/qradar-setup-vmware/images/step20-2.webp deleted file mode 100644 index 5dd61ba..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step20-2.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step20-3.webp b/content/blog/qradar-setup-vmware/images/step20-3.webp deleted file mode 100644 index 2a551d4..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step20-3.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step20-4.webp b/content/blog/qradar-setup-vmware/images/step20-4.webp deleted file mode 100644 index 08ba50c..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step20-4.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step20-5.webp b/content/blog/qradar-setup-vmware/images/step20-5.webp deleted file mode 100644 index e5a9165..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step20-5.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step20.webp b/content/blog/qradar-setup-vmware/images/step20.webp deleted file mode 100644 index 48a3441..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step20.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step21.webp b/content/blog/qradar-setup-vmware/images/step21.webp deleted file mode 100644 index 5ff07f6..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step21.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step22-2.webp b/content/blog/qradar-setup-vmware/images/step22-2.webp deleted file mode 100644 index aef99a4..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step22-2.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step22.webp b/content/blog/qradar-setup-vmware/images/step22.webp deleted file mode 100644 index a0d1e7f..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step22.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step23-2.webp b/content/blog/qradar-setup-vmware/images/step23-2.webp deleted file mode 100644 index 112eb1e..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step23-2.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step23-3.webp b/content/blog/qradar-setup-vmware/images/step23-3.webp deleted file mode 100644 index ac7dbb3..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step23-3.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step23.webp b/content/blog/qradar-setup-vmware/images/step23.webp deleted file mode 100644 index 666584e..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step23.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step24-2.webp b/content/blog/qradar-setup-vmware/images/step24-2.webp deleted file mode 100644 index 87c1bc5..0000000 Binary files 
a/content/blog/qradar-setup-vmware/images/step24-2.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step24-3.webp b/content/blog/qradar-setup-vmware/images/step24-3.webp deleted file mode 100644 index 3c97885..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step24-3.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step24-4.webp b/content/blog/qradar-setup-vmware/images/step24-4.webp deleted file mode 100644 index c254aa5..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step24-4.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step24-5.webp b/content/blog/qradar-setup-vmware/images/step24-5.webp deleted file mode 100644 index 2573037..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step24-5.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step24-6.webp b/content/blog/qradar-setup-vmware/images/step24-6.webp deleted file mode 100644 index 6ffb707..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step24-6.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step24.webp b/content/blog/qradar-setup-vmware/images/step24.webp deleted file mode 100644 index 32db3b2..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step24.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step25-2.webp b/content/blog/qradar-setup-vmware/images/step25-2.webp deleted file mode 100644 index e3db822..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step25-2.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step25.webp b/content/blog/qradar-setup-vmware/images/step25.webp deleted file mode 100644 index 2ff6c0d..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step25.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step26-2.webp b/content/blog/qradar-setup-vmware/images/step26-2.webp deleted file mode 100644 index f2bedb5..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step26-2.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step26.webp b/content/blog/qradar-setup-vmware/images/step26.webp deleted file mode 100644 index 7a1e2fe..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step26.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step3.webp b/content/blog/qradar-setup-vmware/images/step3.webp deleted file mode 100644 index 208b134..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step3.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step4.webp b/content/blog/qradar-setup-vmware/images/step4.webp deleted file mode 100644 index 912dd54..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step4.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step5.webp b/content/blog/qradar-setup-vmware/images/step5.webp deleted file mode 100644 index a504d2d..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step5.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step6.webp b/content/blog/qradar-setup-vmware/images/step6.webp deleted file mode 100644 index 26ce3fd..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step6.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step7.webp b/content/blog/qradar-setup-vmware/images/step7.webp deleted file mode 
100644 index b5677a9..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step7.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step8.webp b/content/blog/qradar-setup-vmware/images/step8.webp deleted file mode 100644 index a70e950..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step8.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/images/step9.webp b/content/blog/qradar-setup-vmware/images/step9.webp deleted file mode 100644 index 7f8ea6b..0000000 Binary files a/content/blog/qradar-setup-vmware/images/step9.webp and /dev/null differ diff --git a/content/blog/qradar-setup-vmware/index.md b/content/blog/qradar-setup-vmware/index.md deleted file mode 100644 index 876dd17..0000000 --- a/content/blog/qradar-setup-vmware/index.md +++ /dev/null @@ -1,380 +0,0 @@ ---- -title: "How to setup IBM QRadar CE on VMware Workstation" -description: "" -summary: "This is a guide to setup IBM QRadar Community Edition SIEM on VMware Workstation." -date: 2023-09-11T14:33:15+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["qradar", "siem", "vmware"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -# weight: 1 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - relative: false # when using page bundles set this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/tree/main/content" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -## Overview - -This is a guide to setup IBM QRadar Community Edition SIEM on VMware Workstation. - -IBM Qradar is a [security information and event management (SIEM)](https://en.wikipedia.org/wiki/Security_information_and_event_management "SIEM @ Wikipedia") product. It collects log data from an enterprise, its network devices, host assets and operating systems, applications, vulnerabilities, and user activities and behaviors. It also provides real-time monitoring, alerting, and offense management. - -I use VMware® Workstation 17 Pro (17.0.0 build-20800274) and QRadar CE ISO (QRadarCE733GA_v1_0.ova). - -**Software Requirements:** - -- [VMware Workstation Pro](https://www.vmware.com/products/workstation-pro/workstation-pro-evaluation.html) or [VMware Workstation Player](https://www.vmware.com/products/workstation-player/workstation-player-evaluation.html) -- [QRadar CE ISO](https://www.ibm.com/community/qradar/ce/) - -**Hardware requirements:** - -- Memory minimum requirements: 8 GB RAM or 10 GB w/applications -- Disk space minimum: 250 GB -- CPU: 2 cores (minimum) or 6 cores (recommended) -- One network adapter with access to the Internet is required -- A static public and private IP addresses is required for QRadar Community Edition -- The assigned hostname must be a fully qualified domain name - -## Steps - -### 1. Open VMware Workstation - -![Open VMware Workstation Pro](./images/step1.webp#center "Open VMware Workstation Pro") - -### 2. Click File > Open - -![File > Open](./images/step2.webp#center "File > Open") - -### 3. 
Select QRadar CE ISO (QRadarCE733GA_v1_0.ova) and click Open - -![Select QRadar CE ISO](./images/step3.webp#center "Select QRadar CE ISO") - -### 4. Name the VM and select the location to save the VM, then click Import - -![Importing VM](./images/step4.webp#center "Importing VM") - -### 5. Wait for the import to complete then click Memory under Devices - -![Click Memory under Devices](./images/step5.webp#center "Click Memory under Devices") - -### 6. Set the memory to 8 GB or 10 GB - -> **Note:** If installation fails, try increasing the memory to 10 GB or more. - -![Set the memory to 8 GB or 10 GB](./images/step6.webp#center "Set the memory to 8 GB or 10 GB") - -### 7. Set the Processors to 2 cores (minimum) or 6 cores (recommended) - -I set it to 4 cores. - -![Set the Processors to 2 cores (minimum) or 6 cores (recommended)](./images/step7.webp#center "Set the Processors to 2 cores (minimum) or 6 cores (recommended)") - -### 8. Set the Network Adapter from Bridged to NAT - -![Set the Network Adapter from Bridged to NAT](./images/step8.webp#center "Set the Network Adapter from Bridged to NAT") - -In VMware, the Bridged and NAT network adapter modes serve different purposes. Bridged mode allows the virtual machine (VM) to directly access the physical network as if it were a separate physical machine, receiving its own IP address and behaving as an independent device on the network. On the other hand, NAT (Network Address Translation) mode creates a private network within the host machine, allowing the VM to share the host's network connection. VMs in NAT mode use the host's IP address for external communication and are isolated from the external network, making them suitable for scenarios where the VMs need internet access but don't require direct interaction with external network devices. - -For example, If you are in a Cafe and your VMs is not connected to the internet, try changing the Network Adapter from Bridged to NAT. This will allow your VMs to share the host's network connection. - -Docs: [VMware Bridged vs NAT vs Host-Only Network](https://docs.vmware.com/en/VMware-Workstation-Pro/17/com.vmware.ws.using.doc/GUID-D9B0A52D-38A2-45D7-A9EB-987ACE77F93C.html) - -### 9. When you are done with the settings, click Power on this virtual machine - -![Power on this virtual machine](./images/step9.webp#center "Power on this virtual machine") - -### 10. Wait for the VM to boot up, and then login with the root user and create a new password - -> **Note:** Don't forget the password you set. You will need it later to login to the VM. Also, in linux when you type your password, it won't show anything. Just type it and press enter. - -![Login with the root user and create a new password](./images/step10.webp#center "Login with the root user and create a new password") - -### 11. Set the QRadar network settings to use IPv4 only - -Type `nmtui` to open the Network Manager - -![Type nmtui to open the Network Manager](./images/step11.webp#center "Type nmtui to open the Network Manager") - -Wait for the NetworkManager TUI to open. 
Then select **Edit a connection** and press **Enter** - -![select Edit a connection](./images/step11-2.webp#center "select Edit a connection") - -Then select **Edit** using the arrow key and press **Enter** - -![select Edit a connection](./images/step11-3.webp#center "select Edit a connection") - -Set the **IPv6 configuration** to **Ignore** and press **Enter** - -![Set the IPv6 configuration to Ignore](./images/step11-4.webp#center "Set the IPv6 configuration to Ignore") - -So that it looks like this - -![Set the IPv6 configuration to Ignore](./images/step11-5.webp#center "Set the IPv6 configuration to Ignore") - -Then select **OK** and press **Enter** - -### 12. Set the QRadar hostname - -After setting the network settings, back to the main menu and select **Set system hostname** and press **Enter** - -![Set system hostname](./images/step12.webp#center "Set system hostname") - -Then type the hostname you want to use. For example `qradar.yourname.com` and choose **OK** then press **Enter** - -![Set system hostname](./images/step12-2.webp#center "Set system hostname") - -Docs: [Recommended practices for hostname creation](https://www.ibm.com/support/pages/qradar-recommended-practices-hostname-creation) - -### 13. Reactivate the network settings - -After setting the hostname, back to the main menu and select **Activate a connection** and press **Enter** - -![Activate a connection](./images/step13.webp#center "Activate a connection") - -Select the network interface and press **Enter** - -Press **Enter** 2x in Deactivate option. - -![change Deactivate to Activate](./images/step13-2.webp#center "change Deactivate to Activate") - -### 14. Select Quit > OK and press Enter to save the changes - -![Select OK and press Enter to save the changes](./images/step14.webp#center "Select OK and press Enter to save the changes") - -### 15. Type `ls -l` to see the files in the current directory and type `./setup` to start the setup - -![./setup](./images/step15.webp#center "./setup") - -### 16. Accept the license agreement - -Press **Enter** to accept the license agreement - -![Press Enter to continue the setup](./images/step16.webp#center "Press Enter to continue the setup") - -Press **Space** to scroll down - -![Press Space to scroll down and type q to accept the license agreement](./images/step16-2.webp#center "Press Space to scroll down and type q to accept the license agreement") - -and type `q` to accept the license agreement - -![Press Space to scroll down and type q to accept the license agreement](./images/step16-3.webp#center "Press Space to scroll down and type q to accept the license agreement") - -Then press **Enter** to continue - -![Press Space to scroll down and type q to accept the license agreement](./images/step16-4.webp#center "Press Space to scroll down and type q to accept the license agreement") - -### 17. Type `Y` to install the QRadar CE - -![Type y to install the QRadar CE](./images/step17.webp#center "Type y to install the QRadar CE") - -Wait for the installation to complete. This will take a while. Approximately 30 minutes to 1 hour or more. Depends on your internet connection and your computer specs. - -![Wait for the installation to complete](./images/step17-2.webp#center "Wait for the installation to complete") - -Mine took around 40 minutes to complete. 
- -Rig: -- CPU: [Ryzen 5 4600H](https://www.amd.com/en/products/apu/amd-ryzen-5-4600h "Ryzen 5 4600H @ AMD") (6 cores, 12 threads) -- RAM: 16 GB (8GB dual channel) - -![Finish installing](./images/step17-3.webp#center "Finish installing") - -### 18. Set the password for the admin user to log in to the QRadar CE web interface - -Type the password you want to use and press **Enter** - -> **Note:** Don't forget the password you set. You will need it later to log in to the QRadar CE web interface. The password can be the same as the VM's root password. - -![Set the password for the admin user to login to the QRadar CE web interface](./images/step18.webp#center "Set the password for the admin user to login to the QRadar CE web interface") - -### 19. Type `ip addr` or `ip a` to see the IP address of the VM - -![Type ip addr or ip a to see the IP address of the VM](./images/step19.webp#center "Type ip addr or ip a to see the IP address of the VM") - -Under the `ens33` interface, you will see the IP address of the VM. In my case, it's `192.168.211.129` - -> **Note:** The IP address of the VM will be different for everyone. - -### 20. After we get the IP address, we can SSH to the VM - -You can use [PuTTY](https://www.putty.org/), [Windows Terminal](https://www.microsoft.com/en-us/p/windows-terminal/9n0dx20hk701?activetab=pivot:overviewtab), [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/install-win10), [MobaXterm](https://mobaxterm.mobatek.net/) or any other [SSH](https://en.wikipedia.org/wiki/Secure_Shell "SSH @ Wikipedia") client you want. - -In my case, I use [Termius](https://termius.com/). - -- Open Termius and click **New Host** - -![Open Termius and click New Host](./images/step20.webp#center "Open Termius and click New Host") - -- Set the hostname to the IP address of the VM, which is `192.168.211.129`, set the username to `root`, and type the password you set earlier. You can also set the VM details if you want. In Termius you can assign labels, groups, and tags to your VMs. - -![setup host](./images/step20-2.webp#center "setup host") - -- Connect to the VM - -You can use the Quick Connect button to connect to the VM without having to type the IP address, username, and password. - -![Connect to the VM](./images/step20-3.webp#center "Connect to the VM") - -- Accept the fingerprint - -Click **Add and continue** - -![Accept the fingerprint](./images/step20-4.webp#center "Accept the fingerprint") - -- You are now connected to the VM - -![You are now connected to the VM](./images/step20-5.webp#center "You are now connected to the VM") - -### 21. Check the Tomcat service status - -Type `systemctl status tomcat` to check the Tomcat service status - -Docs: -- [Tomcat Systemd](https://www.dogtagpki.org/wiki/Tomcat_Systemd "Tomcat Systemd Docs @ Dogtag PKI") -- [Tomcat Apache](https://tomcat.apache.org/tomcat-8.5-doc/index.html "Tomcat Docs @ Apache") - -![Check the Tomcat service status](./images/step21.webp#center "Check the Tomcat service status") - -### 22. Run the following command to fix the QRadar CE licensing - -![Alert in IBM QRadar Website](./images/step22.webp#center "alert in IBM QRadar Website") - -The IBM QRadar CE ISO ships with a bug in the product licensing function. QRadar developers have recently identified this defect, which may cause the deployment to stop functioning, so we need to run the following command. 
- -More info: [UPDATED: A QRadar deploy changes on 31 December 2020 can impact product functionality](https://www.ibm.com/support/pages/node/6395080 "UPDATED: A QRadar deploy changes on 31 December 2020 can impact product functionality") - -Copy and paste this command to the VM and press **Enter** - -```bash -if [ -f /opt/qradar/ecs/license.txt ] ; then echo -n "QRadar:Q1 Labs Inc.:0007634bda1e2:WnT9X7BDFOgB1WaXwokODc:12/31/20" > /opt/qradar/ecs/license.txt ; fi ; if [ -f /opt/ibm/si/services/ecs-ec-ingress/current/eventgnosis/license.txt ] ; then echo -n "QRadar:Q1 Labs Inc.:0007634bda1e2:WnT9X7BDFOgB1WaXwokODc:12/31/20" > /opt/ibm/si/services/ecs-ec-ingress/current/eventgnosis/license.txt ; fi ; if [ -f /opt/ibm/si/services/ecs-ep/current/eventgnosis/license.txt ] ; then echo -n "QRadar:Q1 Labs Inc.:0007634bda1e2:WnT9X7BDFOgB1WaXwokODc:12/31/20" > /opt/ibm/si/services/ecs-ep/current/eventgnosis/license.txt ; fi ; if [ -f /opt/ibm/si/services/ecs-ec/current/eventgnosis/license.txt ] ; then echo -n "QRadar:Q1 Labs Inc.:0007634bda1e2:WnT9X7BDFOgB1WaXwokODc:12/31/20" > /opt/ibm/si/services/ecs-ec/current/eventgnosis/license.txt ; fi ; if [ -f /usr/eventgnosis/ecs/license.txt ] ; then echo -n "QRadar:Q1 Labs Inc.:0007634bda1e2:WnT9X7BDFOgB1WaXwokODc:12/31/20" > /usr/eventgnosis/ecs/license.txt ; fi ; if [ -f /opt/qradar/conf/templates/ecs_license.txt ] ; then echo -n "QRadar:Q1 Labs Inc.:0007634bda1e2:WnT9X7BDFOgB1WaXwokODc:12/31/20" > /opt/qradar/conf/templates/ecs_license.txt ; fi -``` - -
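To sanity-check that the workaround took effect, you can optionally print one of the patched files back; it should now contain only the single license string (the path is taken from the command above):

```bash
# Optional spot check after running the license fix.
# The file should contain exactly one line: the QRadar license string.
cat /opt/qradar/ecs/license.txt ; echo
```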
-Command break down - -This is a complex shell command written in Bash scripting language. Let's break down what it does step by step: - -1. `if [ -f /opt/qradar/ecs/license.txt ] ; then ... ; fi`: - - This part of the command checks if a file named `license.txt` exists in the directory `/opt/qradar/ecs/`. - - If the file exists, the subsequent command enclosed by `then` and `fi` is executed. - -2. `echo -n "QRadar:Q1 Labs Inc.:0007634bda1e2:WnT9X7BDFOgB1WaXwokODc:12/31/20" > /opt/qradar/ecs/license.txt`: - - If the file `/opt/qradar/ecs/license.txt` exists, this command overwrites the contents of that file with the given text: "QRadar:Q1 Labs Inc.:0007634bda1e2:WnT9X7BDFOgB1WaXwokODc:12/31/20". - - The `-n` flag with `echo` is used to suppress the trailing newline character, so the text is written without a newline at the end. - -The same logic is repeated for several other paths, checking for the existence of `license.txt` files and overwriting their contents if they exist. The paths being checked are as follows: - -- `/opt/ibm/si/services/ecs-ec-ingress/current/eventgnosis/license.txt` -- `/opt/ibm/si/services/ecs-ep/current/eventgnosis/license.txt` -- `/opt/ibm/si/services/ecs-ec/current/eventgnosis/license.txt` -- `/usr/eventgnosis/ecs/license.txt` -- `/opt/qradar/conf/templates/ecs_license.txt` - -In each case, if the respective `license.txt` file exists, it's overwritten with the same text: "QRadar:Q1 Labs Inc.:0007634bda1e2:WnT9X7BDFOgB1WaXwokODc:12/31/20". - -This command seems to be updating license files for different components or services, ensuring that they all have the same license information. The provided information appears to be related to QRadar, likely a license key or information related to a software product. - -

- -![Run the command to update the QRadar CE](./images/step22-2.webp#center "Run the command to update the QRadar CE") - -### 23. Open the QRadar CE web interface in your browser - -Open your browser and type the IP address of the VM. In my case, it's `https://192.168.211.129` - -> **Note:** Don't forget to use `https://` instead of `http://` because the QRadar CE web interface uses HTTPS. - -- Click **Advanced...** and click **Accept the Risk and Continue** - -![Click Advanced... and click Accept the Risk and Continue](./images/step23.webp#center "Click Advanced... and click Accept the Risk and Continue") - -- Login with the username `admin` and the password you set earlier - -![Login with the username admin and the password you set earlier](./images/step23-2.webp#center "Login with the username admin and the password you set earlier") - -- Accept the EULA - -![Accept the EULA](./images/step23-3.webp#center "Accept the EULA") - -### 24. Configure the Flow Sources - -- Click the hamburger menu icon in the top left corner of the QRadar Console. - -![hamburger menu](./images/step24.webp#center "Hamburger menu") - -- Click **Admin** - -![Click Admin](./images/step24-2.webp#center "Click Admin") - -- Scroll down and click **Flow Sources** - -![Click Flow Sources](./images/step24-3.webp#center "Click Flow Sources") - -- Click **Add** - -![Click Add](./images/step24-4.webp#center "Click Add") - -- Wait for the form to load and set the **Flow Source Name** to `qradar_network` and set the **Flow Source Type** to `Network Interface` and click **Save** - -![Set the Flow Source Name to qradar_network and set the Flow Source Type to Network Interface and click Save](./images/step24-5.webp#center "Set the Flow Source Name to qradar_network and set the Flow Source Type to Network Interface and click Save") - -- So that it looks like this - -![So that it looks like this](./images/step24-6.webp#center "So that it looks like this") - -### 25. Deploy the changes - -- Back to the admin page and click **Deploy Changes** - -![Back to the admin page and click Deploy Changes](./images/step25.webp#center "Back to the admin page and click Deploy Changes") - -- Click **Continue** if you are sure you want to deploy the changes - -![Click Continue if you are sure you want to deploy the changes](./images/step25-2.webp#center "Click Continue if you are sure you want to deploy the changes") and wait for the changes to be deployed. This will take a while. Approximately 2-5 minutes or more. - -### 26. Check the Network Activity tab, and if there are any logs, it means the QRadar CE is working - -- **Log Activity** - -![Log Activity](./images/step26.webp#center "Log Activity") - -**Network Activity** - -![Network Activity](./images/step26-2.webp#center "Network Activity") - -## Congratulations! 
You have successfully setup IBM QRadar CE on VMware Workstation - -## References: - -- https://www.ibm.com/community/qradar/ce/ -- https://www.ibm.com/docs/en/SS42VS_7.4/pdf/b_siem_inst.pdf -- https://www.ibm.com/docs/en/SS42VS_7.4/pdf/b_qradar_system_notifications.pdf -- https://www.ibm.com/community/qradar/wp-content/uploads/sites/5/2020/03/QRadar_CE_Under_the_Radar_21Feb.pdf -- https://www.ibm.com/docs/en/qradar-on-cloud?topic=support-common-problems -- https://www.ibm.com/docs/en/qsip -- http://ftpmirror.your.org/pub/misc/ftp.software.ibm.com/software/security/products/qradar/documents/7.2.4/QLM/EN/b_qradar_system_notifications.pdf -- [Tutorial: QRadar CE SIEM - Installation and Configuration (Complete Steps) by Semi Yulianto](https://youtu.be/DCd5f4VFDdk?si=ou0iQCT50kZdDBBM) -- Guide/learning material from [Infinite Learning HCAI Program](https://kampusmerdeka.kemdikbud.go.id/program/studi-independen/browse/863c3409-8b4e-4c96-9edd-71ee61e9fc41/7a22d773-4ea0-11ed-a45a-c2cca2f5088a) (I can't share the material/content directly, because it's confidential and belong to [Infinite Learning](https://www.infinitelearning.id/) and IBM Academy) diff --git a/content/blog/qradar-system-time/images/step1-2.webp b/content/blog/qradar-system-time/images/step1-2.webp deleted file mode 100644 index 87c1bc5..0000000 Binary files a/content/blog/qradar-system-time/images/step1-2.webp and /dev/null differ diff --git a/content/blog/qradar-system-time/images/step1-3.webp b/content/blog/qradar-system-time/images/step1-3.webp deleted file mode 100644 index dac69c3..0000000 Binary files a/content/blog/qradar-system-time/images/step1-3.webp and /dev/null differ diff --git a/content/blog/qradar-system-time/images/step1.webp b/content/blog/qradar-system-time/images/step1.webp deleted file mode 100644 index 32db3b2..0000000 Binary files a/content/blog/qradar-system-time/images/step1.webp and /dev/null differ diff --git a/content/blog/qradar-system-time/images/step2.webp b/content/blog/qradar-system-time/images/step2.webp deleted file mode 100644 index 616e94e..0000000 Binary files a/content/blog/qradar-system-time/images/step2.webp and /dev/null differ diff --git a/content/blog/qradar-system-time/images/step3.webp b/content/blog/qradar-system-time/images/step3.webp deleted file mode 100644 index fd41016..0000000 Binary files a/content/blog/qradar-system-time/images/step3.webp and /dev/null differ diff --git a/content/blog/qradar-system-time/images/step4.webp b/content/blog/qradar-system-time/images/step4.webp deleted file mode 100644 index 129f6f5..0000000 Binary files a/content/blog/qradar-system-time/images/step4.webp and /dev/null differ diff --git a/content/blog/qradar-system-time/images/step5.webp b/content/blog/qradar-system-time/images/step5.webp deleted file mode 100644 index eb59a45..0000000 Binary files a/content/blog/qradar-system-time/images/step5.webp and /dev/null differ diff --git a/content/blog/qradar-system-time/images/step6.webp b/content/blog/qradar-system-time/images/step6.webp deleted file mode 100644 index 9b62712..0000000 Binary files a/content/blog/qradar-system-time/images/step6.webp and /dev/null differ diff --git a/content/blog/qradar-system-time/images/step7-2.webp b/content/blog/qradar-system-time/images/step7-2.webp deleted file mode 100644 index 662f90b..0000000 Binary files a/content/blog/qradar-system-time/images/step7-2.webp and /dev/null differ diff --git a/content/blog/qradar-system-time/images/step7-3.webp b/content/blog/qradar-system-time/images/step7-3.webp deleted file mode 
100644 index 8cd9d8b..0000000 Binary files a/content/blog/qradar-system-time/images/step7-3.webp and /dev/null differ diff --git a/content/blog/qradar-system-time/images/step7-4.webp b/content/blog/qradar-system-time/images/step7-4.webp deleted file mode 100644 index ed2dc05..0000000 Binary files a/content/blog/qradar-system-time/images/step7-4.webp and /dev/null differ diff --git a/content/blog/qradar-system-time/images/step7.webp b/content/blog/qradar-system-time/images/step7.webp deleted file mode 100644 index 0fefb5b..0000000 Binary files a/content/blog/qradar-system-time/images/step7.webp and /dev/null differ diff --git a/content/blog/qradar-system-time/index.md b/content/blog/qradar-system-time/index.md deleted file mode 100644 index 14cda8a..0000000 --- a/content/blog/qradar-system-time/index.md +++ /dev/null @@ -1,121 +0,0 @@ ---- -title: "Updating the system time on the QRadar Console" -description: "Configure the system time on your QRadar® Console by setting the time manually, or by using NTP servers. The QRadar Console synchronizes its system time with the managed hosts in your deployment. " -summary: "Configure the system time on your QRadar® Console by setting the time manually, or by using NTP servers. The QRadar Console synchronizes its system time with the managed hosts in your deployment. " -date: 2023-09-16T17:09:14+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["qradar", "siem", "time", "ntp"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -# weight: 1 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - relative: false # when using page bundles set this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/tree/main/content" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -## Overview - -This guide describes how to configure the system time on your QRadar® Console by setting the time manually, or by using NTP servers. The QRadar Console synchronizes its system time with the managed hosts in your deployment. - -## Steps - -### 1. Click the Admin tab - -- Click the hamburger menu (☰) in the top left corner of the QRadar Console. - -![hamburger menu](./images/step1.webp#center "hamburger menu") - -- Click **Admin** - -![admin](./images/step1-2.webp#center "admin menu") - -- Or you can just click the **Admin** tab - -![admin tab](./images/step1-3.webp#center "admin tab") - -### 2. In the System Configuration section, click the System and License Management icon. - -![system and license management](./images/step2.webp#center "system and license management") - -### 3. From the Display menu, select Systems. - -Select the relevant host - -![systems](./images/step3.webp#center "systems") - -### 4. From the Actions menu, select View and Manage System, and then click the System Time tab. - -![system time](./images/step4.webp#center "system time") - -### 5. Select a time zone from the Time Zone menu. - -You can configure only the time zone on a managed host. 
The system time is synchronized with the QRadar Console, but if the managed host is in a different time zone, you can change to that time zone. - -![time zone](./images/step5.webp#center "time zone") - -### 6. Set the time manually. - -- Select the **Set time manually** check box. -- Change the **Date** and **Time** to the correct values. - -![set time manually](./images/step6.webp#center "set time manually") - -### 7. Set the time by using NTP servers. - -- Select the **Specify NTP servers** check box. -- Click the plus icon (**+**) to add an NTP server. -- You can use NTP server pools, such as `pool.ntp.org` or `time.google.com`. - -![ntp servers](./images/step7.webp#center "ntp servers") - -- The final result should look like this: - -![ntp servers](./images/step7-2.webp#center "ntp servers") - -- Click **Save**. - -The first NTP server is the primary server. If the primary server is not available, the secondary server is used. If the secondary server is not available, the tertiary server is used. If the tertiary server is not available, the system time is not synchronized. - -- If prompted to restart services, click **OK**. - -![restart services](./images/step7-3.webp#center "restart services") - -![restart services](./images/step7-4.webp#center "restart services") - -## Conclusion - -You have successfully configured the system time on your QRadar® Console by setting the time manually, or by using NTP servers. The QRadar Console synchronizes its system time with the managed hosts in your deployment. - -## References - -- [Updating the system time on the QRadar Console](https://www.ibm.com/docs/en/qsip/7.5?topic=configuration-update-system-time) -- [NTP Pool Project](https://www.ntppool.org/en/) -- [Google Public NTP](https://developers.google.com/time) -- [NTP Servers](https://www.ntppool.org/en/use.html) -- [NTP Servers in Asia](https://www.ntppool.org/zone/asia) -- [Indonesia NTP Servers](https://www.ntppool.org/zone/id) -- [NTP @ Wikipedia](https://en.wikipedia.org/wiki/Network_Time_Protocol "NTP @ Wikipedia") - diff --git a/content/privacy.md b/content/privacy.md index e8cab99..05c7b03 100644 --- a/content/privacy.md +++ b/content/privacy.md @@ -5,12 +5,12 @@ hidemeta: true --- - This website was created with [Hugo](https://gohugo.io/) a [Static Site Generator (SSG)](https://en.wikipedia.org/wiki/Static_site_generator "Static Site Generator (SSG) @ Wikipedia") written in [Go](https://go.dev/). It does not use cookies of any kind. This site uses `localStorage`[^1] for the purpose of switching between light and dark themes for UI/UX, with no interaction with the server, only on the client side. There are no forms or other mechanisms that process personal data. -- This Website is hosted in [Netlify](https://www.netlify.com/). Netlify may collect user personal information from visitors to this website, including logs of visitor IP addresses, to comply with legal obligations, and to maintain the security and integrity of the website and the service. See the Netlify Privacy Statement for details.[^2] +- This website is hosted on [Cloudflare](https://www.cloudflare.com/). Cloudflare may collect user personal information from visitors to this website, including logs of visitor IP addresses, to comply with legal obligations, and to maintain the security and integrity of the website and the service. See the Cloudflare Privacy Policy for details.[^2] - All external links open in a new tab and by default are told not to send a referrer in the header. 
I do not use an anonymizing service so that you will know exactly where the link will take you to. Also, I use `noopener` attribute, which prevents the opening page to gain any kind of access to the original page. - If any external links are missing the `rel="external nofollow noopener noreferrer"`[^3] let me know and I'll update it ASAP. - I will never add user tracking/analytics of any type because I simply do not care. I don't care how popular the site is or isn't - _it exists for my personal satisfaction_. - Apart from this, no data is collected, stored or evaluated. No ads, no tracking/analytics, just my articles to read. [^1]: [MDN Web Docs: Web Storage API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Storage_API "Web Storage API @ MDN Web Docs") & [MDN Web Docs: Local Storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage "localStorage @ MDN Web Docs") -[^2]: [Netlify's General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA)](https://www.netlify.com/gdpr-ccpa/) & [Netlify's Privacy Statement](https://www.netlify.com/privacy/) +[^2]: [Cloudflare's Privacy Policy](https://www.cloudflare.com/privacypolicy/) [^3]: [MDN Web Docs: Link types](https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types "Link types @ MDN Web Docs") \ No newline at end of file diff --git a/content/writeups/_index.md b/content/writeups/_index.md deleted file mode 100644 index cf3ea94..0000000 --- a/content/writeups/_index.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: Writeups -summary: Collection of writeups written on different problems/challenges/machines. -description: "Collection of writeups written on different problems/challenges/machines.
-Classified according to the platform they were hosted on." -type: list -ShowRssButtonInSectionTermList: true -ShowFullTextinRSS: true ---- diff --git a/content/writeups/google-cloudskillsboost/GSP101/images/firewall.webp b/content/writeups/google-cloudskillsboost/GSP101/images/firewall.webp deleted file mode 100644 index 721bae5..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP101/images/firewall.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP101/images/lab_variable.webp b/content/writeups/google-cloudskillsboost/GSP101/images/lab_variable.webp deleted file mode 100644 index 562440e..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP101/images/lab_variable.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP101/images/vm_create.webp b/content/writeups/google-cloudskillsboost/GSP101/images/vm_create.webp deleted file mode 100644 index a298d2e..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP101/images/vm_create.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP101/index.md b/content/writeups/google-cloudskillsboost/GSP101/index.md deleted file mode 100644 index 7f7ca25..0000000 --- a/content/writeups/google-cloudskillsboost/GSP101/index.md +++ /dev/null @@ -1,106 +0,0 @@ ---- -title: "[GSP101] Google Cloud Essential Skills: Challenge Lab" -description: "" -summary: "Quest: Cloud Architecture: Design, Implement, and Manage" -date: 2023-05-26T11:30:03+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["writeups", "challenge", "google-cloudskillsboost", "gsp101", "google-cloud", "cloudskillsboost", "juaragcp", "google-cloud-platform", "gcp", "cloud-computing", "cloud", "cloud-architecture"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -weight: 1 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - relative: false # when using page bundles set this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/blob/main/writeups/GSP101/index.md" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -### GSP101 - -![Lab Banner](https://cdn.qwiklabs.com/GMOHykaqmlTHiqEeQXTySaMXYPHeIvaqa2qHEzw6Occ%3D#center) - -- Time: 45 minutes -- Difficulty: Intermediate -- Price: 5 Credits - -Lab: [GSP101](https://www.cloudskillsboost.google/focuses/1734?parent=catalog)
-Quest: [Cloud Architecture: Design, Implement, and Manage](https://www.cloudskillsboost.google/quests/124)
- -## Challenge scenario - -Your company is ready to launch a brand new product! Because you are entering a totally new space, you have decided to deploy a new website as part of the product launch. The new site is complete, but the person who built the new site left the company before they could deploy it. - -## Your challenge - -Your challenge is to deploy the site in the public cloud by completing the tasks below. You will use a simple Apache web server as a placeholder for the new site in this exercise. Good luck! - -1. Create a Compute Engine instance and add the necessary firewall rules. - - - In the **Cloud Console**, click the **Navigation menu** > **Compute Engine** > **VM Instances**. - - Click **Create instance**. - - Set the following values, leaving all other values at their defaults: - - | Property | Value (type value or select option as specified) | - | --- | --- | - | Name | `INSTANCE_NAME` | - | Zone | `COMPUTE_ZONE` | - - ![Lab Variable](./images/lab_variable.webp#center) - - ![VM Create](./images/vm_create.webp#center) - - - Under **Firewall** check **Allow HTTP traffic**. - - ![Firewall](./images/firewall.webp#center) - - - Click **Create**. (A `gcloud` equivalent of these Console steps is sketched after this list.) - -2. Configure Apache2 Web Server in your instance. - - - In the **Cloud Console**, click the **Navigation menu** > **Compute Engine** > **VM Instances**. - - Click the SSH button next to the `INSTANCE_NAME` instance. - - Run the following command: - - ```bash - sudo su - - ``` - - then run: - - ```bash - apt-get update - apt-get install apache2 -y - - service --status-all - ``` - -3. Test your server. - - - In the **Cloud Console**, click the **Navigation menu** > **Compute Engine** > **VM Instances**. - - Access the VM using an http address. Check that your URL is http:// EXTERNAL_IP and not https:// EXTERNAL_IP. - - Verify that the **Apache2 Debian Default Page** shows up. 
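If you prefer the command line over the Console, the sketch below is a rough `gcloud` equivalent of the Console steps in task 1. It is not the lab's official solution; `INSTANCE_NAME` and `COMPUTE_ZONE` stand for the placeholders shown in your lab panel, and the `allow-http` rule name is arbitrary:

```bash
# Hypothetical CLI equivalent of the Console steps above (run in Cloud Shell).
gcloud compute instances create INSTANCE_NAME \
    --zone=COMPUTE_ZONE \
    --tags=http-server

# Allow inbound HTTP (TCP 80) to instances carrying the http-server tag.
gcloud compute firewall-rules create allow-http \
    --allow=tcp:80 \
    --target-tags=http-server \
    --source-ranges=0.0.0.0/0
```

- -## Congratulations!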
- -![Congratulations Badge](https://cdn.qwiklabs.com/Ol0IAaeZbMNmToILKVne%2BkFlHoAu%2BZtH%2BErA8jO7m%2Bc%3D#center) diff --git a/content/writeups/google-cloudskillsboost/GSP301/index.md b/content/writeups/google-cloudskillsboost/GSP301/index.md deleted file mode 100644 index 6eb4a53..0000000 --- a/content/writeups/google-cloudskillsboost/GSP301/index.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -title: "[GSP301] Deploy a Compute Instance with a Remote Startup Script" -description: "" -summary: "Quest: Cloud Architecture: Design, Implement, and Manage" -date: 2023-05-22T04:13:03+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["writeups", "challenge", "google-cloudskillsboost", "gsp301", "google-cloud", "cloudskillsboost", "juaragcp", "google-cloud-platform", "gcp", "cloud-computing", "cloud", "cloud-architecture"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -weight: 2 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - relative: false # when using page bundles set this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/blob/main/content/writeups/google-cloudskillsboost/GSP301/index.md" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -### GSP301 - -![Lab Banner](https://cdn.qwiklabs.com/GMOHykaqmlTHiqEeQXTySaMXYPHeIvaqa2qHEzw6Occ%3D#center) - -- Time: 1 hour
-- Difficulty: Intermediate
-- Price: 5 Credits - -Lab: [GSP301](https://www.cloudskillsboost.google/focuses/1735?parent=catalog)
-Quest: [Cloud Architecture: Design, Implement, and Manage](https://www.cloudskillsboost.google/quests/124)
- -## Challenge scenario - -You have been given the responsibility of managing the configuration of your organization's Google Cloud virtual machines. You have decided to make some changes to the framework used for managing the deployment and configuration machines - you want to make it easier to modify the startup scripts used to initialize a number of the compute instances. Instead of storing startup scripts directly in the instances' metadata, you have decided to store the scripts in a Cloud Storage bucket and then configure the virtual machines to point to the relevant script file in the bucket. - -A basic bash script that installs the Apache web server software called `install-web.sh` has been provided for you as a sample startup script. You can download this from the Student Resources links on the left side of the page. - -## Your challenge - -Configure a Linux Compute Engine instance that installs the Apache web server software using a remote startup script. In order to confirm that a compute instance Apache has successfully installed, the Compute Engine instance must be accessible via HTTP from the internet. - -### Task 1. Confirm that a Google Cloud Storage bucket exists that contains a file - -Go to cloud shell and run the following command: - -```bash -gsutil mb gs://$DEVSHELL_PROJECT_ID -gsutil cp gs://sureskills-ql/challenge-labs/ch01-startup-script/install-web.sh gs://$DEVSHELL_PROJECT_ID -``` - -### Task 2. Confirm that a compute instance has been created that has a remote startup script called install-web.sh configured - -```bash -gcloud compute instances create example-instance --zone=us-central1-a --tags=http-server --metadata startup-script-url=gs://$DEVSHELL_PROJECT_ID/install-web.sh -``` - -### Task 3. Confirm that a HTTP access firewall rule exists with tag that applies to that virtual machine - -```bash -gcloud compute firewall-rules create allow-http --target-tags http-server --source-ranges 0.0.0.0/0 --allow tcp:80 -``` - -### Task 4. Connect to the server ip-address using HTTP and get a non-error response - -After firewall creation (Task 3) just wait and then check the score - -## Congratulations! 
- -![Congratulations Badge](https://cdn.qwiklabs.com/%2FaI3EMiHeGZc46u89ueTTAEgmRSGj5krSwhpzllr88w%3D#center) diff --git a/content/writeups/google-cloudskillsboost/GSP303/images/IIS_install.webp b/content/writeups/google-cloudskillsboost/GSP303/images/IIS_install.webp deleted file mode 100644 index 23c3746..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP303/images/IIS_install.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP303/images/IIS_install2.webp b/content/writeups/google-cloudskillsboost/GSP303/images/IIS_install2.webp deleted file mode 100644 index d8db571..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP303/images/IIS_install2.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP303/images/RDP_extension.webp b/content/writeups/google-cloudskillsboost/GSP303/images/RDP_extension.webp deleted file mode 100644 index f1d1585..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP303/images/RDP_extension.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP303/images/RDP_login.webp b/content/writeups/google-cloudskillsboost/GSP303/images/RDP_login.webp deleted file mode 100644 index 1ec4c1e..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP303/images/RDP_login.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP303/images/RDP_vm-bastionhost_creds.webp b/content/writeups/google-cloudskillsboost/GSP303/images/RDP_vm-bastionhost_creds.webp deleted file mode 100644 index 0228398..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP303/images/RDP_vm-bastionhost_creds.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP303/images/RDP_vm-securehost_creds.webp b/content/writeups/google-cloudskillsboost/GSP303/images/RDP_vm-securehost_creds.webp deleted file mode 100644 index 2da6c9d..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP303/images/RDP_vm-securehost_creds.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP303/images/VM_instances_vm-bastionhost.webp b/content/writeups/google-cloudskillsboost/GSP303/images/VM_instances_vm-bastionhost.webp deleted file mode 100644 index 41a876e..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP303/images/VM_instances_vm-bastionhost.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP303/images/VM_instances_vm-securehost.webp b/content/writeups/google-cloudskillsboost/GSP303/images/VM_instances_vm-securehost.webp deleted file mode 100644 index 68a029b..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP303/images/VM_instances_vm-securehost.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP303/index.md b/content/writeups/google-cloudskillsboost/GSP303/index.md deleted file mode 100644 index 7f47ed5..0000000 --- a/content/writeups/google-cloudskillsboost/GSP303/index.md +++ /dev/null @@ -1,158 +0,0 @@ ---- -title: "[GSP303] Configure Secure RDP using a Windows Bastion Host" -description: "" -summary: "Quest: Cloud Architecture: Design, Implement, and Manage" -date: 2023-05-24T08:38:03+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["writeups", "challenge", "google-cloudskillsboost", "gsp303", "google-cloud", "cloudskillsboost", "juaragcp", "google-cloud-platform", "gcp", "rdp", "bastion", "cloud-computing", "cloud", "cloud-architecture"] 
-canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -weight: 3 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - relative: false # when using page bundles set this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/blob/main/content/writeups/google-cloudskillsboost/GSP303/index.md" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -### GSP303 - -![Lab Banner](https://cdn.qwiklabs.com/GMOHykaqmlTHiqEeQXTySaMXYPHeIvaqa2qHEzw6Occ%3D#center) - -Time: 1 hour
-Difficulty: Intermediate
-Price: 5 Credits - -Lab: [GSP303](https://www.cloudskillsboost.google/focuses/1737?parent=catalog)
-Quest: [Cloud Architecture: Design, Implement, and Manage](https://www.cloudskillsboost.google/quests/124)
- -## Challenge scenario - -Your company has decided to deploy new application services in the cloud and your assignment is developing a secure framework for managing the Windows services that will be deployed. You will need to create a new VPC network environment for the secure production Windows servers. - -Production servers must initially be completely isolated from external networks and cannot be directly accessed from, or be able to connect directly to, the internet. In order to configure and manage your first server in this environment, you will also need to deploy a bastion host, or jump box, that can be accessed from the internet using the Microsoft Remote Desktop Protocol (RDP). The bastion host should only be accessible via RDP from the internet, and should only be able to communicate with the other compute instances inside the VPC network using RDP. - -Your company also has a monitoring system running from the default VPC network, so all compute instances must have a second network interface with an internal only connection to the default VPC network. - -## Your challenge - -Deploy the secure Windows machine that is not configured for external communication inside a new VPC subnet, then deploy the Microsoft Internet Information Server on that secure machine. - -### Task 1. Create the VPC network - -1. Create a new VPC network called `securenetwork` - - Go to cloud shell and run the following command: - - ```bash - gcloud compute networks create securenetwork --project=$DEVSHELL_PROJECT_ID --subnet-mode=custom --mtu=1460 --bgp-routing-mode=regional - ``` - -2. Then create a new VPC subnet inside `securenetwork` - - ```bash - gcloud compute networks subnets create secure-subnet --project=$DEVSHELL_PROJECT_ID --range=10.0.0.0/24 --stack-type=IPV4_ONLY --network=securenetwork --region=us-central1 - ``` - -3. Once the network and subnet have been configured, configure a firewall rule that allows inbound RDP traffic (`TCP port 3389`) from the internet to the bastion host. This rule should be applied to the appropriate host using network tags. - - ```bash - gcloud compute --project=$DEVSHELL_PROJECT_ID firewall-rules create secuer-firewall --direction=INGRESS --priority=1000 --network=securenetwork --action=ALLOW --rules=tcp:3389 --source-ranges=0.0.0.0/0 --target-tags=rdp - ``` - -### Task 2. Deploy your Windows instances and configure user passwords - -1. Deploy a Windows 2016 server instance called `vm-securehost` with two network interfaces. -2. Configure the first network interface with an internal only connection to the new VPC subnet, and the second network interface with an internal only connection to the default VPC network. This is the secure server. - - ```bash - gcloud compute instances create vm-securehost --project=$DEVSHELL_PROJECT_ID --zone=us-central1-a --machine-type=n1-standard-2 --network-interface=stack-type=IPV4_ONLY,subnet=secure-subnet,no-address --network-interface=stack-type=IPV4_ONLY,subnet=default,no-address --metadata=enable-oslogin=true --maintenance-policy=MIGRATE --provisioning-model=STANDARD --tags=rdp --create-disk=auto-delete=yes,boot=yes,device-name=vm-securehost,image=projects/windows-cloud/global/images/windows-server-2016-dc-v20230510,mode=rw,size=150,type=projects/$DEVSHELL_PROJECT_ID/zones/us-central1-a/diskTypes/pd-standard --no-shielded-secure-boot --shielded-vtpm --shielded-integrity-monitoring --labels=goog-ec-src=vm_add-gcloud --reservation-affinity=any - ``` - -3. 
Install a second Windows 2016 server instance called `vm-bastionhost` with two network interfaces. -4. Configure the first network interface to connect to the new VPC subnet with an ephemeral public (external NAT) address, and the second network interface with an internal only connection to the default VPC network. This is the jump box or bastion host. - - ```bash - gcloud compute instances create vm-bastionhost --project=$DEVSHELL_PROJECT_ID --zone=us-central1-a --machine-type=n1-standard-2 --network-interface=network-tier=PREMIUM,stack-type=IPV4_ONLY,subnet=secure-subnet --network-interface=network-tier=PREMIUM,stack-type=IPV4_ONLY,subnet=default --metadata=enable-oslogin=true --maintenance-policy=MIGRATE --provisioning-model=STANDARD --tags=rdp --create-disk=auto-delete=yes,boot=yes,device-name=vm-securehost,image=projects/windows-cloud/global/images/windows-server-2016-dc-v20230510,mode=rw,size=150,type=projects/$DEVSHELL_PROJECT_ID/zones/us-central1-a/diskTypes/pd-standard --no-shielded-secure-boot --shielded-vtpm --shielded-integrity-monitoring --labels=goog-ec-src=vm_add-gcloud --reservation-affinity=any - ``` - -5. After your Windows instances have been created, create a user account and reset the Windows passwords in order to connect to each instance. -6. The following `gcloud` command creates a new user called `app-admin` and resets the password for a host called `vm-bastionhost` and `vm-securehost` located in the `us-central1-a` region: - - ```bash - gcloud compute reset-windows-password vm-bastionhost --user app_admin --zone us-central1-a - ``` - - ![RDP_vm-bastionhost_creds](images/RDP_vm-bastionhost_creds.webp#center) - - > **Note**: Take note of the password that is generated for the user account. You will need this to connect to the bastion host. - - ```bash - gcloud compute reset-windows-password vm-securehost --user app_admin --zone us-central1-a - ``` - - ![RDP_vm-securehost_creds](images/RDP_vm-securehost_creds.webp#center) - - > **Note**: Take note of the password that is generated for the user account. You will need this to connect to the secure host. - -7. Alternatively, you can force a password reset from the Compute Engine console. You will have to repeat this for the second host as the login credentials for that instance will be different. - -### Task 3. Connect to the secure host and configure Internet Information Server - -To connect to the secure host, you have to RDP into the bastion host first, and from there open a second RDP session to connect to the internal private network address of the secure host. A Windows Compute Instance with an external address can be connected to via RDP using the RDP button that appears next to Windows Compute instances in the Compute Instance summary page. - -1. Connect to the bastion host using the RDP button in the Compute Engine console. - - You can install [Chrome RDP](https://chrome.google.com/webstore/detail/chrome-rdp-for-google-clo/mpbbnannobiobpnfblimoapbephgifkm) extension for Google Cloud Platform - - ![RDP_extension](./images/RDP_extension.webp#center) - -2. Go to Compute Engine > VM instances, click RDP on `vm-bastionhost`, fill username with app_admin and password with your copied `vm-bastionhost`'s password. 
- - ![VM_instances_vm-bastionhost](./images/VM_instances_vm-bastionhost.webp#center) - - ![RDP login](./images/RDP_login.webp#center) - - When connected to a Windows server, you can launch the Microsoft RDP client using the command `mstsc.exe`, or you can search for `Remote Desktop Manager` from the Start menu. This will allow you to connect from the bastion host to other compute instances on the same VPC even if those instances do not have a direct internet connection themselves. - -3. Click Search, search for Remote Desktop Connection and run it -4. Copy and paste the internal ip from `vm-securehost`, click Connect - - ![VM_instances_vm-securehost](./images/VM_instances_vm-securehost.webp#center) - -5. Fill username with app_admin and password with your copied `vm-securehost`'s password -6. Click Search, type Powershell, right click and Run as Administrator -7. Run the following command to install IIS (Internet Information Server) : - - ```powershell - Install-WindowsFeature -name Web-Server -IncludeManagementTools - ``` - - ![IIS](./images/IIS_install.webp#center) - - ![IIS Installation](./images/IIS_install2.webp#center) - -## Congratulations! - -![Congratulations Badge](https://cdn.qwiklabs.com/%2FaI3EMiHeGZc46u89ueTTAEgmRSGj5krSwhpzllr88w%3D#center) diff --git a/content/writeups/google-cloudskillsboost/GSP304/index.md b/content/writeups/google-cloudskillsboost/GSP304/index.md deleted file mode 100644 index 34c540e..0000000 --- a/content/writeups/google-cloudskillsboost/GSP304/index.md +++ /dev/null @@ -1,96 +0,0 @@ ---- -title: "[GSP304] Build and Deploy a Docker Image to a Kubernetes Cluster" -description: "" -summary: "Quest: Cloud Architecture: Design, Implement, and Manage" -date: 2023-05-25T06:46:03+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["writeups", "challenge", "google-cloudskillsboost", "gsp304", "google-cloud", "cloudskillsboost", "juaragcp", "google-cloud-platform", "gcp", "docker", "kubernetes", "cloud-computing", "cloud", "cloud-architecture"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -weight: 4 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - relative: false # when using page bundles set this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/blob/main/content/writeups/google-cloudskillsboost/GSP304/index.md" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -### GSP304 - -![Lab Banner](https://cdn.qwiklabs.com/GMOHykaqmlTHiqEeQXTySaMXYPHeIvaqa2qHEzw6Occ%3D#center) - -- Time: 1 hour 15 minutes
-- Difficulty: Intermediate
-- Price: 5 Credits - -Lab: [GSP304](https://www.cloudskillsboost.google/focuses/1738?parent=catalog)
-Quest: [Cloud Architecture: Design, Implement, and Manage](https://www.cloudskillsboost.google/quests/124)
- -## Challenge scenario - -Your development team is interested in adopting a containerized microservices approach to application architecture. You need to test a sample application they have provided for you to make sure that it can be deployed to a Google Kubernetes Engine cluster. The development group provided a simple Go application called `echo-web` with a Dockerfile and the associated context that allows you to build a Docker image immediately. - -## Your challenge - -To test the deployment, you need to download the sample application, then build the Docker container image using a tag that allows it to be stored on the Container Registry. Once the image has been built, you'll push it out to the Container Registry before you can deploy it. - -With the image prepared, you can then create a Kubernetes cluster and deploy the sample application to it. - -1. An application image with a v1 tag has been pushed to the gcr.io repository - - ```bash - mkdir echo-web - cd echo-web - gsutil cp -r gs://$DEVSHELL_PROJECT_ID/echo-web.tar.gz . - tar -xzf echo-web.tar.gz - rm echo-web.tar.gz - cd echo-web - docker build -t echo-app:v1 . - docker tag echo-app:v1 gcr.io/$DEVSHELL_PROJECT_ID/echo-app:v1 - docker push gcr.io/$DEVSHELL_PROJECT_ID/echo-app:v1 - ``` - -2. A new Kubernetes cluster exists (zone: us-central1-a) - - ```bash - gcloud config set compute/zone us-central1-a - - gcloud container clusters create echo-cluster --num-nodes=2 --machine-type=n1-standard-2 - ``` - -3. Check that an application has been deployed to the cluster - - ```bash - kubectl create deployment echo-web --image=gcr.io/$DEVSHELL_PROJECT_ID/echo-app:v1 - ``` - -4. Test that a service exists that responds to requests like Echo-app. (A quick verification sketch follows this list.) - - ```bash - kubectl expose deployment echo-web --type=LoadBalancer --port 80 --target-port 8000 - ``` 
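Before checking your score, you can verify the service end to end. This sketch is not part of the lab instructions; the `EXTERNAL_IP` placeholder comes from the `kubectl get svc` output and may take a minute or two to be assigned:

```bash
# Wait for the LoadBalancer service to receive an external IP.
kubectl get svc echo-web

# Replace EXTERNAL_IP with the address reported above (placeholder).
curl http://EXTERNAL_IP
```

- -## Congratulations!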
- -![Congratulations Badge](https://cdn.qwiklabs.com/GOodosAwxciMN42hNV4ZqZIwQ5eXORJcUSvZ2SAuXYI%3D#center) diff --git a/content/writeups/google-cloudskillsboost/GSP305/images/bucket.webp b/content/writeups/google-cloudskillsboost/GSP305/images/bucket.webp deleted file mode 100644 index c586fe5..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP305/images/bucket.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP305/images/kubernetes_cluster.webp b/content/writeups/google-cloudskillsboost/GSP305/images/kubernetes_cluster.webp deleted file mode 100644 index 6230677..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP305/images/kubernetes_cluster.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP305/index.md b/content/writeups/google-cloudskillsboost/GSP305/index.md deleted file mode 100644 index fa6d468..0000000 --- a/content/writeups/google-cloudskillsboost/GSP305/index.md +++ /dev/null @@ -1,135 +0,0 @@ ---- -title: "[GSP305] Scale Out and Update a Containerized Application on a Kubernetes Cluster" -description: "" -summary: "Quest: Cloud Architecture: Design, Implement, and Manage" -date: 2023-05-25T07:55:03+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["writeups", "challenge", "google-cloudskillsboost", "gsp305", "google-cloud", "cloudskillsboost", "juaragcp", "google-cloud-platform", "gcp", "container", "kubernetes", "cloud-computing", "cloud", "cloud-architecture"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -weight: 5 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - relative: false # when using page bundles set this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/blob/main/content/writeups/google-cloudskillsboost/GSP304/index.md" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -### GSP305 - -![Lab Banner](https://cdn.qwiklabs.com/GMOHykaqmlTHiqEeQXTySaMXYPHeIvaqa2qHEzw6Occ%3D#center) - -- Time: 1 hour
-- Difficulty: Intermediate
-- Price: 5 Credits - -Lab: [GSP305](https://www.cloudskillsboost.google/focuses/1739?parent=catalog)
-Quest: [Cloud Architecture: Design, Implement, and Manage](https://www.cloudskillsboost.google/quests/124)
- -## Challenge scenario - -You are taking over ownership of a test environment and have been given an updated version of a containerized test application to deploy. Your systems' architecture team has started adopting a containerized microservice architecture. You are responsible for managing the containerized test web applications. You will first deploy the initial version of a test application, called `echo-app` to a Kubernetes cluster called `echo-cluster` in a deployment called `echo-web`. - -Before you get started, open the navigation menu and select **Cloud Storage**. The last steps in the Deployment Manager script used to set up your environment creates a bucket. - -Refresh the Storage browser until you see your bucket. You can move on once your Console resembles the following: - -![bucket](./images/bucket.webp#center) - -Check to make sure your GKE cluster has been created before continuing. Open the navigation menu and select **Kubernetes Engine** > **Clusters**. - -Continue when you see a green checkmark next to `echo-cluster`: - -![kubernetes cluster](./images/kubernetes_cluster.webp#center) - -To deploy your first version of the application, run the following commands in Cloud Shell to get up and running: - -```bash -gcloud container clusters get-credentials echo-cluster --zone=us-central1-a -``` - -```bash -kubectl create deployment echo-web --image=gcr.io/qwiklabs-resources/echo-app:v1 -``` - -```bash -kubectl expose deployment echo-web --type=LoadBalancer --port 80 --target-port 8000 -``` - -## Your challenge - -You need to update the running `echo-app` application in the `echo-web` deployment from the v1 to the v2 code you have been provided. You must also scale out the application to 2 instances and confirm that they are all running. - -1. Check that there is a tagged image in gcr.io for echo-app:v2. - - ```bash - mkdir echo-web - cd echo-web - gsutil cp -r gs://$DEVSHELL_PROJECT_ID/echo-web-v2.tar.gz . - tar -xzf echo-web-v2.tar.gz - rm echo-web-v2.tar.gz - docker build -t echo-app:v2 . - docker tag echo-app:v2 gcr.io/$DEVSHELL_PROJECT_ID/echo-app:v2 - docker push gcr.io/$DEVSHELL_PROJECT_ID/echo-app:v2 - ``` - -2. Echo-app:v2 is running on the Kubernetes cluster. - - Deploy the first version of the application. - - ```bash - gcloud container clusters get-credentials echo-cluster --zone=us-central1-a - kubectl create deployment echo-web --image=gcr.io/qwiklabs-resources/echo-app:v1 - kubectl expose deployment echo-web --type=LoadBalancer --port 80 --target-port 8000 - ``` - - Edit the `deployment.apps` file. - - ```bash - kubectl edit deploy echo-web - ``` - - Start the editor by type `i`. Change `image=...:v1` to `image=...:v2`. - - `image=gcr.io/qwiklabs-resources/echo-app:v2` - - Save the `deployment.apps` file, hit **ESC** then type `:wq` and **Enter**. - -3. The Kubernetes cluster deployment reports 2 replicas. - - ```bash - kubectl scale deployment echo-web --replicas=2 - ``` - -4. The application must respond to web requests with V2.0.0. - - ```bash - kubectl expose deployment echo-web --type=LoadBalancer --port 80 --target-port 8000 - - kubectl get svc - ``` - -## Congratulations! 
- -![Congratulations Badge](https://cdn.qwiklabs.com/GOodosAwxciMN42hNV4ZqZIwQ5eXORJcUSvZ2SAuXYI%3D#center) diff --git a/content/writeups/google-cloudskillsboost/GSP306/images/DB_host.webp b/content/writeups/google-cloudskillsboost/GSP306/images/DB_host.webp deleted file mode 100644 index 4f2e632..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP306/images/DB_host.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP306/images/DB_host2.webp b/content/writeups/google-cloudskillsboost/GSP306/images/DB_host2.webp deleted file mode 100644 index 62ed7fc..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP306/images/DB_host2.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP306/images/IP_demo_blog_site.webp b/content/writeups/google-cloudskillsboost/GSP306/images/IP_demo_blog_site.webp deleted file mode 100644 index cc72860..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP306/images/IP_demo_blog_site.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP306/images/SQL_instance.webp b/content/writeups/google-cloudskillsboost/GSP306/images/SQL_instance.webp deleted file mode 100644 index 02a8676..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP306/images/SQL_instance.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP306/images/SSH_blog.webp b/content/writeups/google-cloudskillsboost/GSP306/images/SSH_blog.webp deleted file mode 100644 index c993617..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP306/images/SSH_blog.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP306/images/blog_demo.webp b/content/writeups/google-cloudskillsboost/GSP306/images/blog_demo.webp deleted file mode 100644 index 0ad43e7..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP306/images/blog_demo.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP306/images/vm_instances.webp b/content/writeups/google-cloudskillsboost/GSP306/images/vm_instances.webp deleted file mode 100644 index 4555813..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP306/images/vm_instances.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP306/index.md b/content/writeups/google-cloudskillsboost/GSP306/index.md deleted file mode 100644 index aee77e5..0000000 --- a/content/writeups/google-cloudskillsboost/GSP306/index.md +++ /dev/null @@ -1,180 +0,0 @@ ---- -title: "[GSP306] Migrate a MySQL Database to Google Cloud SQL" -description: "" -summary: "Quest: Cloud Architecture: Design, Implement, and Manage" -date: 2023-05-25T12:19:03+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["writeups", "challenge", "google-cloudskillsboost", "gsp306", "google-cloud", "cloudskillsboost", "juaragcp", "google-cloud-platform", "gcp", "mysql", "database", "cloud-computing", "cloud", "cloud-architecture"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -weight: 6 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - 
relative: false # when using page bundles set this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/blob/main/content/writeups/google-cloudskillsboost/GSP304/index.md" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -### GSP306 - -![Lab Banner](https://cdn.qwiklabs.com/GMOHykaqmlTHiqEeQXTySaMXYPHeIvaqa2qHEzw6Occ%3D#center) - -- Time: 1 hour 15 minutes
-- Difficulty: Advanced
-- Price: 7 Credits - -Lab: [GSP306](https://www.cloudskillsboost.google/focuses/1740?parent=catalog)
-Quest: [Cloud Architecture: Design, Implement, and Manage](https://www.cloudskillsboost.google/quests/124)
-
-## Challenge scenario
-
-Your WordPress blog is running on a server that is no longer suitable. As the first part of a complete migration exercise, you are migrating the locally hosted database used by the blog to Cloud SQL.
-
-The existing WordPress installation is installed in the `/var/www/html/wordpress` directory in the instance called `blog` that is already running in the lab. You can access the blog by opening a web browser and pointing to the external IP address of the blog instance.
-
-The existing database for the blog is provided by MySQL running on the same server. The existing MySQL database is called `wordpress`, and the user **blogadmin** (password __Password1*__) has full access to that database.
-
-## Your challenge
-
-- You need to create a new Cloud SQL instance to host the migrated database.
-- Once you have created the new database and configured it, you can then create a database dump of the existing database and import it into Cloud SQL.
-- When the data has been migrated, you will then reconfigure the blog software to use the migrated database.
-
-For this lab, the WordPress site configuration file is located here: `/var/www/html/wordpress/wp-config.php`.
-
-To sum it all up, your challenge is to migrate the database to Cloud SQL and then reconfigure the application so that it no longer relies on the local MySQL database. Good luck!
-
-1. Check that there is a Cloud SQL instance.
-
-   Go to Cloud Shell and run the following command:
-
-   ```bash
-   export ZONE=us-central1-a
-
-   gcloud sql instances create wordpress --tier=db-n1-standard-1 --activation-policy=ALWAYS --zone $ZONE
-   ```
-
-   > **Note**: It may take several minutes to create the instance.
-
-   Run the following command:
-
-   ```bash
-   export ADDRESS=[IP_ADDRESS]/32
-   ```
-
-   Replace `[IP_ADDRESS]` with the IP address from the `Demo Blog Site` field
-
-   ![IP demo blog site](./images/IP_demo_blog_site.webp#center)
-
-   or with the external IP of the `blog` instance in Compute Engine.
-
-   ![External IP blog instance](./images/vm_instances.webp#center)
-
-   For example:
-
-   ```bash
-   export ADDRESS=104.196.226.155/32
-   ```
-
-   Run the following command:
-
-   ```bash
-   gcloud sql users set-password --host % root --instance wordpress --password Password1*
-
-   gcloud sql instances patch wordpress --authorized-networks $ADDRESS --quiet
-   ```
-
-2. Check that there is a user database on the Cloud SQL instance.
-
-   - In the **Cloud Console**, click the **Navigation menu** > **Compute Engine** > **VM Instances**.
-   - Click the SSH button next to the `blog` instance.
-   - Run the following command:
-
-   ```bash
-   MYSQLIP=$(gcloud sql instances describe wordpress --format="value(ipAddresses.ipAddress)")
-
-   mysql --host=$MYSQLIP \
-   --user=root --password
-   ```
-
-   > **Note**: When prompted, enter the password __Password1*__
-
-   And then run the following commands:
-
-   ```sql
-   CREATE DATABASE wordpress;
-   CREATE USER 'blogadmin'@'%' IDENTIFIED BY 'Password1*';
-   GRANT ALL PRIVILEGES ON wordpress.* TO 'blogadmin'@'%';
-   FLUSH PRIVILEGES;
-   ```
-
-   - Type `exit` to leave the MySQL shell.
-
-3. Check that the blog instance is authorized to access Cloud SQL.
-
-   In the `blog` SSH session, run the following command:
-
-   ```bash
-   sudo mysqldump -u root -pPassword1* wordpress > wordpress_backup.sql
-
-   mysql --host=$MYSQLIP --user=root -pPassword1* --verbose wordpress < wordpress_backup.sql
-
-   sudo service apache2 restart
-   ```
-
-4. Check that wp-config.php points to the Cloud SQL instance.
- - Run the following command: - - ```bash - cd /var/www/html/wordpress/ - - sudo nano wp-config.php - ``` - - - Replace `localhost` string on `DB_HOST` with **Public IP address** of SQL Instance that has copied before. - - ![Public IP SQL Instance](./images/SQL_instance.webp#center) - - From this: - - ![DB Host](./images/DB_host.webp#center) - - To this: - - ![DB Host 2](./images/DB_host2.webp#center) - - - Press **Ctrl + O** and then press **Enter** to save your edited file. Press **Ctrl + X** to exit the nano editor. - - Exit the SSH. - -5. Check that the blog still responds to requests. - - - In the **Cloud Console**, click the **Navigation menu** > **Compute Engine** > **VM Instances**. - - Click the **External IP** of the `blog` instance. - - Verify that no error. - - ![Blog demo site](./images/blog_demo.webp#center) - -## Congratulations! - -![Congratulations Badge](https://cdn.qwiklabs.com/GOodosAwxciMN42hNV4ZqZIwQ5eXORJcUSvZ2SAuXYI%3D#center) diff --git a/content/writeups/google-cloudskillsboost/GSP313/images/labs_variable.webp b/content/writeups/google-cloudskillsboost/GSP313/images/labs_variable.webp deleted file mode 100644 index 2dd27aa..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP313/images/labs_variable.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP313/images/labs_variable2.webp b/content/writeups/google-cloudskillsboost/GSP313/images/labs_variable2.webp deleted file mode 100644 index dbce1e0..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP313/images/labs_variable2.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP313/images/machine-type.webp b/content/writeups/google-cloudskillsboost/GSP313/images/machine-type.webp deleted file mode 100644 index c9fc657..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP313/images/machine-type.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP313/images/zone_variable_task2.webp b/content/writeups/google-cloudskillsboost/GSP313/images/zone_variable_task2.webp deleted file mode 100644 index ae0bbf4..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP313/images/zone_variable_task2.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP313/index.md b/content/writeups/google-cloudskillsboost/GSP313/index.md deleted file mode 100644 index 577bf30..0000000 --- a/content/writeups/google-cloudskillsboost/GSP313/index.md +++ /dev/null @@ -1,251 +0,0 @@ ---- -title: "[GSP313] Create and Manage Cloud Resources: Challenge Lab" -description: "" -summary: "Quest: Create and Manage Cloud Resources" -date: 2023-05-22T08:13:03+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["writeups", "challenge", "google-cloudskillsboost", "gsp313", "google-cloud", "cloudskillsboost", "juaragcp", "google-cloud-platform", "gcp", "kubernetes", "load-balancer", "cloud-computing"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -weight: 7 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - relative: false # when using page bundles set this to true - 
hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/blob/main/content/writeups/google-cloudskillsboost/GSP313/index.md" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -### GSP313 - -![Lab Banner](https://cdn.qwiklabs.com/GMOHykaqmlTHiqEeQXTySaMXYPHeIvaqa2qHEzw6Occ%3D#center) - -- Time: 1 hour
-- Difficulty: Introductory
-- Price: 1 Credit - -Lab: [GSP313](https://www.cloudskillsboost.google/focuses/10258?parent=catalog)
-Quest: [Create and Manage Cloud Resources](https://www.cloudskillsboost.google/quests/120)
-
-## Challenge scenario
-
-You have started a new role as a Junior Cloud Engineer for Jooli, Inc. You are expected to help manage the infrastructure at Jooli. Common tasks include provisioning resources for projects.
-
-You are expected to have the skills and knowledge for these tasks, so step-by-step guides are not provided.
-
-Some Jooli, Inc. standards you should follow:
-
-Create all resources in the default region or zone, unless otherwise directed.
-
-Naming normally uses the format _team-resource_; for example, an instance could be named **nucleus-webserver1**.
-
-Allocate cost-effective resource sizes. Projects are monitored, and excessive resource use will result in the containing project's termination (and possibly yours), so plan carefully. This is the guidance the monitoring team is willing to share: unless directed, use **f1-micro** for small Linux VMs, and use **n1-standard-1** for Windows or other applications, such as Kubernetes nodes.
-
-## Your challenge
-
-As soon as you sit down at your desk and open your new laptop, you receive several requests from the Nucleus team. Read through each description, and then create the resources.
-
-## Setup
-
-Export the following environment variables using the values specific to your lab instructions.
-
-```bash
-export INSTANCE_NAME=
-export ZONE=
-export REGION=
-export PORT=
-export FIREWALL_NAME=
-```
-
-![labs variable](./images/labs_variable.webp#center)
-
-You can find the zone in the Task 2 description.
-
-![zone_variable_task2](./images/zone_variable_task2.webp#center)
-
-The region is just the first part of the zone. For example, if the zone is `us-east1-b`, then the region is `us-east1`.
-
-Example:
-
-```bash
-export INSTANCE_NAME=nucleus-jumphost-295
-export ZONE=us-central1-b
-export REGION=us-central1
-export PORT=8080
-export FIREWALL_NAME=accept-tcp-rule-633
-```
-
-### Task 1. Create a project jumphost instance
-
-**_Beware: the machine type in your lab may differ from the one shown here, so don't forget to change it._**
-![machine-type](./images/machine-type.webp#center) - -Go to cloud shell and run the following command: - -```bash -gcloud compute instances create $INSTANCE_NAME \ - --network nucleus-vpc \ - --zone $ZONE \ - --machine-type e2-micro \ - --image-family debian-10 \ - --image-project debian-cloud -``` - -### Task 2. Create a Kubernetes service cluster - -Go to cloud shell and run the following command: - -```bash -gcloud container clusters create nucleus-backend \ ---num-nodes 1 \ ---network nucleus-vpc \ ---zone $ZONE - -gcloud container clusters get-credentials nucleus-backend \ ---zone $ZONE -``` - -- Use the Docker container hello-app (`gcr.io/google-samples/hello-app:2.0`) as place holder. - -```bash -kubectl create deployment hello-server \ ---image=gcr.io/google-samples/hello-app:2.0 -``` - -- Expose the app on port `APP_PORT_NUMBER`. - -```bash -kubectl expose deployment hello-server \ ---type=LoadBalancer \ ---port $PORT -``` - -### Task 3. Set up an HTTP load balancer - -1. Create startup-script. - - ```bash - cat << EOF > startup.sh - #! /bin/bash - apt-get update - apt-get install -y nginx - service nginx start - sed -i -- 's/nginx/Google Cloud Platform - '"\$HOSTNAME"'/' /var/www/html/index.nginx-debian.html - EOF - ``` - -2. Create instance template. - - ```bash - gcloud compute instance-templates create web-server-template \ - --metadata-from-file startup-script=startup.sh \ - --network nucleus-vpc \ - --machine-type g1-small \ - --region $ZONE - ``` - -3. Create target pool. - - ```bash - gcloud compute target-pools create nginx-pool --region=$REGION - ``` - -4. Create managed instance group. - - ```bash - gcloud compute instance-groups managed create web-server-group \ - --base-instance-name web-server \ - --size 2 \ - --template web-server-template \ - --region $REGION - ``` - -5. Create firewall rule named as `FIREWALL_RULE` to allow traffic (80/tcp). - - ```bash - gcloud compute firewall-rules create $FIREWALL_NAME \ - --allow tcp:80 \ - --network nucleus-vpc - ``` - -6. Create health check. - - ```bash - gcloud compute http-health-checks create http-basic-check - gcloud compute instance-groups managed \ - set-named-ports web-server-group \ - --named-ports http:80 \ - --region $REGION - ``` - -7. Create backend service, and attach the managed instance group with named port (http:80). - - ```bash - gcloud compute backend-services create web-server-backend \ - --protocol HTTP \ - --http-health-checks http-basic-check \ - --global - - gcloud compute backend-services add-backend web-server-backend \ - --instance-group web-server-group \ - --instance-group-region $REGION \ - --global - ``` - -8. Create URL map and target the HTTP proxy to route requests to your URL map. - - ```bash - gcloud compute url-maps create web-server-map \ - --default-service web-server-backend - - gcloud compute target-http-proxies create http-lb-proxy \ - --url-map web-server-map - ``` - -9. Create forwarding rule. - - ```bash - gcloud compute forwarding-rules create http-content-rule \ - --global \ - --target-http-proxy http-lb-proxy \ - --ports 80 - - gcloud compute forwarding-rules create $FIREWALL_NAME \ - --global \ - --target-http-proxy http-lb-proxy \ - --ports 80 - gcloud compute forwarding-rules list - ``` - -> **Note**: Just wait for the load balancer to finish setting up. It may take a few minutes. If you get an error checkmark, wait a few moments and try again. - -10. Testing traffic sent to your instances. 
(**Optional**) - -- In the **Cloud Console**, click the **Navigation menu** > **Network services** > **Load balancing**. -- Click on the load balancer that you just created (`web-server-map`). -- In the **Backend** section, click on the name of the backend and confirm that the VMs are **Healthy**. If they are not healthy, wait a few moments and try reloading the page. -- When the VMs are healthy, test the load balancer using a web browser, going to `http://IP_ADDRESS/`, replacing `IP_ADDRESS` with the load balancer's IP address. - -## Congratulations! - -![Congratulations Badge](https://cdn.qwiklabs.com/%2FaI3EMiHeGZc46u89ueTTAEgmRSGj5krSwhpzllr88w%3D#center) diff --git a/content/writeups/google-cloudskillsboost/GSP315/images/code_function.webp b/content/writeups/google-cloudskillsboost/GSP315/images/code_function.webp deleted file mode 100644 index 1d69a34..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP315/images/code_function.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP315/images/labs_variable.webp b/content/writeups/google-cloudskillsboost/GSP315/images/labs_variable.webp deleted file mode 100644 index 22e2455..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP315/images/labs_variable.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP315/images/map.webp b/content/writeups/google-cloudskillsboost/GSP315/images/map.webp deleted file mode 100644 index 8a1857f..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP315/images/map.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP315/images/package-json.webp b/content/writeups/google-cloudskillsboost/GSP315/images/package-json.webp deleted file mode 100644 index e1617ae..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP315/images/package-json.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP315/images/trigger.webp b/content/writeups/google-cloudskillsboost/GSP315/images/trigger.webp deleted file mode 100644 index ec6a7da..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP315/images/trigger.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP315/index.md b/content/writeups/google-cloudskillsboost/GSP315/index.md deleted file mode 100644 index 096ba0b..0000000 --- a/content/writeups/google-cloudskillsboost/GSP315/index.md +++ /dev/null @@ -1,244 +0,0 @@ ---- -title: "[GSP315] Perform Foundational Infrastructure Tasks in Google Cloud: Challenge Lab" -description: "" -summary: "Quest: Perform Foundational Infrastructure Tasks in Google Cloud" -date: 2023-05-21T08:21:03+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["writeups", "challenge", "google-cloudskillsboost", "gsp315", "google-cloud", "cloudskillsboost", "juaragcp", "google-cloud-platform", "gcp", "cloud-computing", "cloud-functions", "cloud-storage", "pubsub"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -weight: 8 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - relative: false # when using page bundles set 
this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/blob/main/content/writeups/google-cloudskillsboost/GSP315/index.md" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -### GSP315 - -![Lab Banner](https://cdn.qwiklabs.com/GMOHykaqmlTHiqEeQXTySaMXYPHeIvaqa2qHEzw6Occ%3D#center) - -- Time: 1 hour
-- Difficulty: Introductory
-- Price: 1 Credit - -Lab: [GSP315](https://www.cloudskillsboost.google/focuses/10379?parent=catalog)
-Quest: [Perform Foundational Infrastructure Tasks in Google Cloud](https://www.cloudskillsboost.google/quests/118)
-
-## Challenge scenario
-
-You are just starting your junior cloud engineer role with Jooli Inc. So far you have been helping teams create and manage Google Cloud resources.
-
-You are expected to have the skills and knowledge for these tasks, so don’t expect step-by-step guides.
-
-## Your challenge
-
-You are now asked to help a newly formed development team with some of their initial work on a new project around storing and organizing photographs, called memories. You have been asked to assist the memories team with the initial configuration for their application development environment; you receive the following request to complete these tasks:
-
-- Create a bucket for storing the photographs.
-- Create a Pub/Sub topic that will be used by a Cloud Function you create.
-- Create a Cloud Function.
-- Remove the previous cloud engineer’s access from the memories project.
-
-Some Jooli Inc. standards you should follow:
-
-- Create all resources in the **us-east1** region and **us-east1-b** zone, unless otherwise directed.
-- Use the project VPCs.
-- Naming is normally _team-resource_, e.g. an instance could be named **kraken-webserver1**.
-- Allocate cost effective resource sizes. Projects are monitored and excessive resource use will result in the containing project's termination (and possibly yours), so beware. This is the guidance the monitoring team is willing to share; unless directed, use **f1-micro** for small Linux VMs and **n1-standard-1** for Windows or other applications such as Kubernetes nodes.
-
-Each task is described in detail below. Good luck!
-
-### Task 1. Create a bucket
-
-- You need to create a bucket called `Bucket Name` for the storage of the photographs.
-
-Go to Cloud Shell and run the following command to create a bucket.
-
-Replace `[BUCKET_NAME]` with the bucket name from the lab instructions.
-
-![Labs Variable](./images/labs_variable.webp#center)
-
-```bash
-gsutil mb gs://[BUCKET_NAME]/
-```
-
-### Task 2. Create a Pub/Sub topic
-
-- Create a Pub/Sub topic called `Topic Name` for the Cloud Function to send messages.
-
-Go to Cloud Shell and run the following command to create a Pub/Sub topic.
-
-Replace `[TOPIC_NAME]` with the topic name from the lab instructions.
-
-```bash
-gcloud pubsub topics create [TOPIC_NAME]
-```
-
-### Task 3. Create the thumbnail Cloud Function
-
-1. In the **Cloud Console**, click the **Navigation menu** > **Cloud Functions**.
-2. Click **Create function**.
-3. In the **Create function** dialog, enter the following values:
-
-   - Function Name: `CLOUD_FUNCTION_NAME` (use the function name from the lab instructions).
-   - Trigger: Cloud Storage
-   - Event Type: Finalizing/Creating
-   - Bucket: `BUCKET_NAME`
-
-   ![trigger](./images/trigger.webp#center)
-
-   - Click **_Save_**.
-   - Click **_Next_**.
-   - Runtime: Node.js 14
-   - Entry Point (Function to execute): thumbnail
-   - Source Code: Inline editor
-   - Replace the code for index.js and package.json with the versions below.
-   - In `line 15` of `index.js` replace the text **REPLACE_WITH_YOUR_TOPIC_NAME** with the `TOPIC_NAME` you created in Task 2.
- - `index.js`: - - ```JavaScript - /* globals exports, require */ - //jshint strict: false - //jshint esversion: 6 - "use strict"; - const crc32 = require("fast-crc32c"); - const { Storage } = require('@google-cloud/storage'); - const gcs = new Storage(); - const { PubSub } = require('@google-cloud/pubsub'); - const imagemagick = require("imagemagick-stream"); - exports.thumbnail = (event, context) => { - const fileName = event.name; - const bucketName = event.bucket; - const size = "64x64" - const bucket = gcs.bucket(bucketName); - const topicName = "REPLACE_WITH_YOUR_TOPIC_NAME"; - const pubsub = new PubSub(); - if ( fileName.search("64x64_thumbnail") == -1 ){ - // doesn't have a thumbnail, get the filename extension - var filename_split = fileName.split('.'); - var filename_ext = filename_split[filename_split.length - 1]; - var filename_without_ext = fileName.substring(0, fileName.length - filename_ext.length ); - if (filename_ext.toLowerCase() == 'png' || filename_ext.toLowerCase() == 'jpg'){ - // only support png and jpg at this point - console.log(`Processing Original: gs://${bucketName}/${fileName}`); - const gcsObject = bucket.file(fileName); - let newFilename = filename_without_ext + size + '_thumbnail.' + filename_ext; - let gcsNewObject = bucket.file(newFilename); - let srcStream = gcsObject.createReadStream(); - let dstStream = gcsNewObject.createWriteStream(); - let resize = imagemagick().resize(size).quality(90); - srcStream.pipe(resize).pipe(dstStream); - return new Promise((resolve, reject) => { - dstStream - .on("error", (err) => { - console.log(`Error: ${err}`); - reject(err); - }) - .on("finish", () => { - console.log(`Success: ${fileName} → ${newFilename}`); - // set the content-type - gcsNewObject.setMetadata( - { - contentType: 'image/'+ filename_ext.toLowerCase() - }, function(err, apiResponse) {}); - pubsub - .topic(topicName) - .publisher() - .publish(Buffer.from(newFilename)) - .then(messageId => { - console.log(`Message ${messageId} published.`); - }) - .catch(err => { - console.error('ERROR:', err); - }); - }); - }); - } - else { - console.log(`gs://${bucketName}/${fileName} is not an image I can handle`); - } - } - else { - console.log(`gs://${bucketName}/${fileName} already has a thumbnail`); - } - }; - ``` - - Look like this: - - ![code_function](./images/code_function.webp#center) - - `package.json`: - - ```json - { - "name": "thumbnails", - "version": "1.0.0", - "description": "Create Thumbnail of uploaded image", - "scripts": { - "start": "node index.js" - }, - "dependencies": { - "@google-cloud/pubsub": "^2.0.0", - "@google-cloud/storage": "^5.0.0", - "fast-crc32c": "1.0.4", - "imagemagick-stream": "4.1.1" - }, - "devDependencies": {}, - "engines": { - "node": ">=4.3.2" - } - } - ``` - - Like this: - - ![package-json](./images/package-json.webp#center) - - - Click **Deploy**. - -4. Download this [image](https://storage.googleapis.com/cloud-training/gsp315/map.jpg) to your local machine or download this map image below. - ![map image](./images/map.webp#center) -5. In the console, click the **Navigation menu** > **Cloud Storage** > **Buckets**. -6. Click the name of the bucket that you created. -7. In the **Objects** tab, click **Upload files**. -8. In the file dialog, go to the file that you downloaded and select it. -9. Click **Refresh Bucket**. -10. Verify that the thumbnail image was created. -11. If you getting error, you can upload the image again. - -### Task 4. Remove the previous cloud engineer - -1. 
In the console, click the **Navigation menu** > **IAM & Admin** > **IAM**. -2. Search for the previous cloud engineer (`Username 2` with the role of Viewer). -3. Click the **pencil icon** to edit, and then select the **trash icon** to delete role. -4. Click **Save**. - -## Congratulations! - -![Congratulations Badge](https://cdn.qwiklabs.com/Hgcj1JOh2iuL7imDUME0%2BjEemAfZlnOJoEHsVFIVQCY%3D#center) diff --git a/content/writeups/google-cloudskillsboost/GSP319/images/export variable.webp b/content/writeups/google-cloudskillsboost/GSP319/images/export variable.webp deleted file mode 100644 index ca843aa..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP319/images/export variable.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP319/images/fancy store.webp b/content/writeups/google-cloudskillsboost/GSP319/images/fancy store.webp deleted file mode 100644 index 6fa038f..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP319/images/fancy store.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP319/images/kubectl get services.webp b/content/writeups/google-cloudskillsboost/GSP319/images/kubectl get services.webp deleted file mode 100644 index dfb75c2..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP319/images/kubectl get services.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP319/images/kubectl get svc.webp b/content/writeups/google-cloudskillsboost/GSP319/images/kubectl get svc.webp deleted file mode 100644 index 34e14ae..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP319/images/kubectl get svc.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP319/images/labs variable.webp b/content/writeups/google-cloudskillsboost/GSP319/images/labs variable.webp deleted file mode 100644 index c13ea8f..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP319/images/labs variable.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP319/index.md b/content/writeups/google-cloudskillsboost/GSP319/index.md deleted file mode 100644 index a97f42d..0000000 --- a/content/writeups/google-cloudskillsboost/GSP319/index.md +++ /dev/null @@ -1,359 +0,0 @@ ---- -title: "[GSP319] Build a Website on Google Cloud: Challenge Lab" -description: "" -summary: "Quest: Build a Website on Google Cloud" -date: 2023-05-19T03:30:03+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["writeups", "challenge", "google-cloudskillsboost", "gsp319", "google-cloud", "cloudskillsboost", "juaragcp", "google-cloud-platform", "gcp", "cloud-computing", "kubernetes", "container", "microservice"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -weight: 9 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - relative: false # when using page bundles set this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/blob/main/content/writeups/google-cloudskillsboost/GSP319/index.md" -# Text: "Suggest Changes" # edit text -# 
appendFilePath: true # to append file path to Edit link ---- - -### GSP319 - -![](https://cdn.qwiklabs.com/GMOHykaqmlTHiqEeQXTySaMXYPHeIvaqa2qHEzw6Occ%3D#center) - -- Time: 1 hour 30 minutes
-- Difficulty: Intermediate
-- Price: 5 Credits - -Lab: [GSP319](https://www.cloudskillsboost.google/focuses/11765?parent=catalog)
-Quest: [Build a Website on Google Cloud](https://www.cloudskillsboost.google/quests/115)
-
-## Challenge lab scenario
-
-You have just started a new role at FancyStore, Inc.
-
-Your task is to take the company's existing monolithic e-commerce website and break it into a series of logically separated microservices. The existing monolith code is sitting in a GitHub repo, and you will be expected to containerize this app and then refactor it.
-
-You are expected to have the skills and knowledge for these tasks, so don't expect step-by-step guides.
-
-You have been asked to take the lead on this, after the last team suffered from monolith-related burnout and left for greener pastures (literally, they are running a lavender farm now). You will be tasked with pulling down the source code, building a container from it (one of the farmers left you a Dockerfile), and then pushing it out to GKE.
-
-You should first build, deploy, and test the Monolith, just to make sure that the source code is sound. After that, you should break out the constituent services into their own microservice deployments.
-
-Some FancyStore, Inc. standards you should follow:
-
-- Create your cluster in `us-central1`.
-- Naming is normally *team-resource*, e.g. an instance could be named **fancystore-orderservice1**.
-- Allocate cost effective resource sizes. Projects are monitored and excessive resource use will result in the containing project's termination.
-- Use the `n1-standard-1` machine type unless directed otherwise.
-
-## Your challenge
-
-As soon as you sit down at your desk and open your new laptop, you receive the following request to complete these tasks. Good luck!
-
-## Setup
-
-Export the following variables in the Cloud Shell:
-
-```bash
-export MONOLITH_IDENTIFIER=
-export CLUSTER_NAME=
-export ORDERS_IDENTIFIER=
-export PRODUCTS_IDENTIFIER=
-export FRONTEND_IDENTIFIER=
-```
-
-From the lab's variables, copy the value of each variable and paste it into Cloud Shell.
-
-![labs variable](./images/labs%20variable.webp#center)
-
-> **Note**: Don't forget to replace the value of each variable with the corresponding value from the lab instructions.
-
-Like this:
-![export variable](./images/export%20variable.webp#center)
-
-> **Note**: Don't forget to enable the APIs:
-
-```bash
-gcloud services enable cloudbuild.googleapis.com
-gcloud services enable container.googleapis.com
-```
-
-### Task 1: Download the monolith code and build your container
-
-First things first, you'll need to [clone your team's git repo](https://github.com/googlecodelabs/monolith-to-microservices.git).
-
-```bash
-git clone https://github.com/googlecodelabs/monolith-to-microservices.git
-```
-
-There's a `setup.sh` script in the root directory of the project that you'll need to run to get your monolith container built up.
-
-```bash
-cd ~/monolith-to-microservices
-
-./setup.sh
-```
-
-After running the `setup.sh` script, ensure your Cloud Shell is running the latest LTS version of Node.js.
-
-```bash
-nvm install --lts
-```
-
-There's a Dockerfile located in the `~/monolith-to-microservices/monolith` folder which you can use to build the application container. Before building the Docker container, you can preview the monolith application on **port 8080**.
-
-> Note: You can skip previewing the application if you want to, but it's a good idea to make sure it's working before you containerize it.
-
-```bash
-cd ~/monolith-to-microservices/monolith
-
-npm start
-```
-
-Press `CTRL+C` to stop the application.
-
-You will have to run Cloud Build (in that monolith folder) to build the image, then push it up to GCR.
Name your artifact as follows: - -- GCR Repo: gcr.io/${GOOGLE_CLOUD_PROJECT} -- Image name: `MONOLITH_IDENTIFIER` -- Image version: 1.0.0 - -```bash -gcloud services enable cloudbuild.googleapis.com - -gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/${MONOLITH_IDENTIFIER}:1.0.0 . -``` - -### Task 2: Create a kubernetes cluster and deploy the application - -Create your cluster as follows: - -- Cluster name: `CLUSTER_NAME` -- Region: us-central1-a -- Node count: 3 - -```bash -gcloud config set compute/zone us-central1-a - -gcloud services enable container.googleapis.com - -gcloud container clusters create $CLUSTER_NAME --num-nodes 3 - -gcloud container clusters get-credentials $CLUSTER_NAME -``` - -Create and expose your deployment as follows: - -- Cluster name: `CLUSTER_NAME` -- Container name: `MONOLITH_IDENTIFIER` -- Container version: 1.0.0 -- Application port: 8080 -- Externally accessible port: 80 - -```bash -kubectl create deployment $MONOLITH_IDENTIFIER --image=gcr.io/${GOOGLE_CLOUD_PROJECT}/${MONOLITH_IDENTIFIER}:1.0.0 - -kubectl expose deployment $MONOLITH_IDENTIFIER --type=LoadBalancer --port 80 --target-port 8080 -``` - -Make note of the IP address that is assigned in the expose deployment operation. Use this command to get the IP address: - -```bash -kubectl get service -``` - -![kubectl get service](./images/kubectl%20get%20services.webp#center) - -You should now be able to visit this IP address from your browser and see the following: - -![fancy store](./images/fancy%20store.webp#center) - -### Task 3. Create new microservices - -Below is the set of services which need to be containerized. Navigate to the source roots mentioned below, and upload the artifacts which are created to the Google Container Registry with the metadata indicated. Name your artifact as follows: - -**Orders Microservice** - -- Service root folder: `~/monolith-to-microservices/microservices/src/orders` -- GCR Repo: gcr.io/${GOOGLE_CLOUD_PROJECT} -- Image name: `ORDERS_IDENTIFIER` -- Image version: 1.0.0 - -```bash -cd ~/monolith-to-microservices/microservices/src/orders - -gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/${ORDERS_IDENTIFIER}:1.0.0 . -``` - -**Products Microservice** - -- Service root folder: `~/monolith-to-microservices/microservices/src/products` -- GCR Repo: gcr.io/${GOOGLE_CLOUD_PROJECT} -- Image name: `PRODUCTS_IDENTIFIER` -- Image version: 1.0.0 - -```bash -cd ~/monolith-to-microservices/microservices/src/products - -gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/${PRODUCTS_IDENTIFIER}:1.0.0 . -``` - -### Task 4: Deploy the new microservices - -Deploy these new containers following the same process that you followed for the `MONOLITH_IDENTIFIER` monolith. Note that these services will be listening on different ports, so make note of the port mappings in the table below. 
Create and expose your deployments as follows: - -**Orders Microservice** - -- Cluster name: `CLUSTER_NAME` -- Container name: `ORDERS_IDENTIFIER` -- Container version: 1.0.0 -- Application port: 8081 -- Externally accessible port: 80 - -```bash -kubectl create deployment $ORDERS_IDENTIFIER --image=gcr.io/${GOOGLE_CLOUD_PROJECT}/${ORDERS_IDENTIFIER}:1.0.0 - -kubectl expose deployment $ORDERS_IDENTIFIER --type=LoadBalancer --port 80 --target-port 8081 -``` - -**Products Microservice** - -- Cluster name: `CLUSTER_NAME` -- Container name: `PRODUCTS_IDENTIFIER` -- Container version: 1.0.0 -- Application port: 8082 -- Externally accessible port: 80 - -```bash -kubectl create deployment $PRODUCTS_IDENTIFIER --image=gcr.io/${GOOGLE_CLOUD_PROJECT}/${PRODUCTS_IDENTIFIER}:1.0.0 - -kubectl expose deployment $PRODUCTS_IDENTIFIER --type=LoadBalancer --port 80 --target-port 8082 -``` - -Get the external IP addresses for the Orders and Products microservices: - -```bash -kubectl get svc -w -``` - -`CTRL+C` to stop the command. - -Now you can verify that the deployments were successful and that the services have been exposed by going to the following URLs in your browser: - -- `http://ORDERS_EXTERNAL_IP/api/orders` -- `http://PRODUCTS_EXTERNAL_IP/api/products` - -Write down the IP addresses for the Orders and Products microservices. You will need them in the next task. - -### Task 5. Configure and deploy the Frontend microservice - ->**Note**: You can use the lab method or use my method. **Choose one that suits you**. - -1. My method (Using [sed](https://linux.die.net/man/1/sed) (stream editor) and using one-line command) - -```bash -export ORDERS_SERVICE_IP=$(kubectl get services -o jsonpath="{.items[1].status.loadBalancer.ingress[0].ip}") - -export PRODUCTS_SERVICE_IP=$(kubectl get services -o jsonpath="{.items[2].status.loadBalancer.ingress[0].ip}") -``` - -```bash -cd ~/monolith-to-microservices/react-app -sed -i "s/localhost:8081/$ORDERS_SERVICE_IP/g" .env -sed -i "s/localhost:8082/$PRODUCTS_SERVICE_IP/g" .env -npm run build -``` - -2. The lab method (Using [nano](https://linux.die.net/man/1/nano) text editor) - -Use the `nano` editor to replace the local URL with the IP address of the new Products microservices. - -```bash -cd ~/monolith-to-microservices/react-app -nano .env -``` - -When the editor opens, your file should look like this. - -```bash -REACT_APP_ORDERS_URL=http://localhost:8081/api/orders -REACT_APP_PRODUCTS_URL=http://localhost:8082/api/products -``` - -Replace the `REACT_APP_ORDERS_URL` and `REACT_APP_PRODUCTS_URL` to the new format while replacing with your Orders and Product microservice IP addresses so it matches below. - -```bash -REACT_APP_ORDERS_URL=http:///api/orders -REACT_APP_PRODUCTS_URL=http:///api/products -``` - -Press **CTRL+O**, press **ENTER**, then **CTRL+X** to save the file in the `nano` editor. Now rebuild the frontend app before containerizing it. - -```bash -npm run build -``` - -### Task 6: Create a containerized version of the Frontend microservice - -The final step is to containerize and deploy the Frontend. Use Cloud Build to package up the contents of the Frontend service and push it up to the Google Container Registry. 
- -- Service root folder: `~/monolith-to-microservices/microservices/src/frontend` -- GCR Repo: gcr.io/${GOOGLE_CLOUD_PROJECT} -- Image name: `FRONTEND_IDENTIFIER` -- Image version: 1.0.0 - -```bash -cd ~/monolith-to-microservices/microservices/src/frontend - -gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/${FRONTEND_IDENTIFIER}:1.0.0 . -``` - -### Task 7: Deploy the Frontend microservice - -Deploy this container following the same process that you followed for the **Orders** and **Products** microservices. Create and expose your deployment as follows: - -- Cluster name: `CLUSTER_NAME` -- Container name: `FRONTEND_IDENTIFIER` -- Container version: 1.0.0 -- Application port: 8080 -- Externally accessible port: 80 - -```bash -kubectl create deployment $FRONTEND_IDENTIFIER --image=gcr.io/${GOOGLE_CLOUD_PROJECT}/${FRONTEND_IDENTIFIER}:1.0.0 - -kubectl expose deployment $FRONTEND_IDENTIFIER --type=LoadBalancer --port 80 --target-port 8080 -``` - -```bash -kubectl get svc -w -``` - -`CTRL+C` to stop the command. - -![kubectl get svc](./images/kubectl%20get%20svc.webp#center) - -Wait until you see the external IP address and check the progress. - -## Congratulations! - -![](https://cdn.qwiklabs.com/tDSBmZi3kH7QdPue8oTiKmR0kVc3UTudveGazkCgmxw%3D#center) diff --git a/content/writeups/google-cloudskillsboost/GSP322/images/bastion_ssh.webp b/content/writeups/google-cloudskillsboost/GSP322/images/bastion_ssh.webp deleted file mode 100644 index b1fa683..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP322/images/bastion_ssh.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP322/images/lab_variable.webp b/content/writeups/google-cloudskillsboost/GSP322/images/lab_variable.webp deleted file mode 100644 index e6fc84f..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP322/images/lab_variable.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP322/images/vm_instances.webp b/content/writeups/google-cloudskillsboost/GSP322/images/vm_instances.webp deleted file mode 100644 index 81fb61d..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP322/images/vm_instances.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP322/index.md b/content/writeups/google-cloudskillsboost/GSP322/index.md deleted file mode 100644 index 3732c66..0000000 --- a/content/writeups/google-cloudskillsboost/GSP322/index.md +++ /dev/null @@ -1,175 +0,0 @@ ---- -title: "[GSP322] Build and Secure Networks in Google Cloud: Challenge Lab" -description: "" -summary: "Quest: Build and Secure Networks in Google Cloud" -date: 2023-05-26T01:08:03+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["writeups", "challenge", "google-cloudskillsboost", "gsp322", "google-cloud", "cloudskillsboost", "juaragcp", "google-cloud-platform", "gcp", "cloud-computing", "firewall", "ssh", "bastion-host", "vpc", "iap"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -weight: 10 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - relative: false # when using page bundles set this to true - hidden: 
true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/blob/main/content/writeups/google-cloudskillsboost/GSP322/index.md" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -### GSP322 - -![Lab Banner](https://cdn.qwiklabs.com/GMOHykaqmlTHiqEeQXTySaMXYPHeIvaqa2qHEzw6Occ%3D#center) - -- Time: 1 hour
-- Difficulty: Advanced
-- Price: 7 Credits - -Lab: [GSP322](https://www.cloudskillsboost.google/focuses/12068?parent=catalog)
-Quest: [Build and Secure Networks in Google Cloud](https://www.cloudskillsboost.google/quests/128)
- -## Setup - -Define the environment variables: - -```bash -export IAP_NETWORK_TAG= -export INTERNAL_NETWORK_TAG= -export HTTP_NETWORK_TAG= -export ZONE= -``` - -Fill the variables with the values from the lab - -For the zone you can check first. In the console, click the **Navigation menu** > **Compute Engine** > **VM Instance**. In my case I used `us-east1-b` - -![SSH to bastion](./images/vm_instances.webp#center) - -To list all available zones: - -```bash -gcloud compute zones list -``` - -Reference: [gcloud compute zones list](https://cloud.google.com/sdk/gcloud/reference/compute/zones/list) - -![Lab Variable](./images/lab_variable.webp#center) - -For example in my case: - -```bash -export IAP_NETWORK_TAG=allow-ssh-iap-ingress-ql-901 -export INTERNAL_NETWORK_TAG=allow-ssh-internal-ingress-ql-803 -export HTTP_NETWORK_TAG=allow-http-ingress-ql-982 -export ZONE=us-east1-b -``` - -## Challenge scenario - -You are a security consultant brought in by Jeff, who owns a small local company, to help him with his very successful website (juiceshop). Jeff is new to Google Cloud and had his neighbour's son set up the initial site. The neighbour's son has since had to leave for college, but before leaving, he made sure the site was running. - -You need to help out Jeff and perform appropriate configuration for security. Below is the current situation: - -![juiceshop configuration diagram](https://cdn.qwiklabs.com/qEwFTP7%2FkyF3cRwfT3FGObt7L7VLB60%2Bvp92hZVnogw%3D) - -## Your challenge - -You need to configure this simple environment securely. Your first challenge is to set up appropriate firewall rules and virtual machine tags. You also need to ensure that SSH is only available to the bastion via IAP. - -For the firewall rules, make sure: - -- The bastion host does not have a public IP address. -- You can only SSH to the bastion and only via IAP. -- You can only SSH to juice-shop via the bastion. -- Only HTTP is open to the world for `juice-shop`. - -Tips and tricks: - -- Pay close attention to the network tags and the associated VPC firewall rules. -- Be specific and limit the size of the VPC firewall rule source ranges. -- Overly permissive permissions will not be marked correct. - -![juiceshop configuration diagram 2](https://cdn.qwiklabs.com/BgxgsuLyqMkhxmO3jDlkHE7yGLIR%2B3rrUabKimlgrbo%3D) - -Suggested order of actions: - -1. Check the firewall rules. Remove the overly permissive rules. - - ```bash - gcloud compute firewall-rules delete open-access - ``` - - Press `y` and `enter` to confirm. - -2. Navigate to Compute Engine in the Cloud Console (**Navigation menu** > **Compute Engine** > **VM Instance**) and identify the bastion host. The instance should be stopped. Start the instance. - - ```bash - gcloud compute instances start bastion --zone=$ZONE - ``` - - If you getting **_error_** when run this command, you can manually activate bastion in VM Instance. - -3. The bastion host is the one machine authorized to receive external SSH traffic. Create a firewall rule that allows [SSH (tcp/22) from the IAP service](https://cloud.google.com/iap/docs/using-tcp-forwarding). The firewall rule must be enabled for the bastion host instance using a network tag of `SSH_IAP_NETWORK_TAG`. - - ```bash - gcloud compute firewall-rules create ssh-ingress --allow=tcp:22 --source-ranges 35.235.240.0/20 --target-tags $IAP_NETWORK_TAG --network acme-vpc - - gcloud compute instances add-tags bastion --tags=$IAP_NETWORK_TAG --zone=$ZONE - ``` - -4. The `juice-shop` server serves HTTP traffic. 
Create a firewall rule that allows traffic on HTTP (tcp/80) to any address. The firewall rule must be enabled for the juice-shop instance using a network tag of `HTTP_NETWORK_TAG`. - - ```bash - gcloud compute firewall-rules create http-ingress --allow=tcp:80 --source-ranges 0.0.0.0/0 --target-tags $HTTP_NETWORK_TAG --network acme-vpc - - gcloud compute instances add-tags juice-shop --tags=$HTTP_NETWORK_TAG --zone=$ZONE - ``` - -5. You need to connect to `juice-shop` from the bastion using SSH. Create a firewall rule that allows traffic on SSH (tcp/22) from `acme-mgmt-subnet` network address. The firewall rule must be enabled for the `juice-shop` instance using a network tag of `SSH_INTERNAL_NETWORK_TAG`. - - ```bash - gcloud compute firewall-rules create internal-ssh-ingress --allow=tcp:22 --source-ranges 192.168.10.0/24 --target-tags $INTERNAL_NETWORK_TAG --network acme-vpc - - gcloud compute instances add-tags juice-shop --tags=$INTERNAL_NETWORK_TAG --zone=$ZONE - ``` - -6. In the Compute Engine instances page, click the SSH button for the **bastion** host. - - ![SSH to bastion](./images/vm_instances.webp#center) - - Once connected, SSH to `juice-shop`. - - ```bash - gcloud compute ssh juice-shop --internal-ip - ``` - - When prompted `Do you want to continue (Y/n)?`, press `y` and `enter`. - - Then create a phrase key for the `juice-shop` instance. You can just press `enter` for the empty passphrase. - - When prompted `Did you mean zone [us-east1-b] for instance: [juice-shop] (Y/n)?`, press `y` and `enter`. - - ![SSH to juice-shop](./images/bastion_ssh.webp#center) - -## Congratulations! - -![Congratulations Badge](https://cdn.qwiklabs.com/e8f4BCFobRlvdqoJ1D%2BHGeJeS9yToL4ZVT3Tg6oeg7Y%3D#center) diff --git a/content/writeups/google-cloudskillsboost/GSP341/images/year.webp b/content/writeups/google-cloudskillsboost/GSP341/images/year.webp deleted file mode 100644 index 85be8e0..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP341/images/year.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP341/index.md b/content/writeups/google-cloudskillsboost/GSP341/index.md deleted file mode 100644 index 3a00860..0000000 --- a/content/writeups/google-cloudskillsboost/GSP341/index.md +++ /dev/null @@ -1,234 +0,0 @@ ---- -title: "[GSP341] Create ML Models with BigQuery ML: Challenge Lab" -description: "" -summary: "Quest: Create ML Models with BigQuery ML" -date: 2023-05-26T03:41:15+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["writeups", "challenge", "google-cloudskillsboost", "gsp341", "google-cloud", "cloudskillsboost", "juaragcp", "google-cloud-platform", "gcp", "cloud-computing", "machine-learning", "bigquery"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -weight: 11 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - relative: false # when using page bundles set this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/blob/main/content/writeups/google-cloudskillsboost/GSP341/index.md" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to 
append file path to Edit link ---- - -### GSP341 - -![Lab Banner](https://cdn.qwiklabs.com/GMOHykaqmlTHiqEeQXTySaMXYPHeIvaqa2qHEzw6Occ%3D#center) - -- Time: 1 hour 30 minutes
-- Difficulty: Intermediate
-- Price: 7 Credits - -Lab: [GSP341](https://www.cloudskillsboost.google/focuses/14294?parent=catalog)
-Quest: [Create ML Models with BigQuery ML](https://www.cloudskillsboost.google/quests/146)
-
-## Challenge lab scenario
-
-You have started a new role as a junior member of the Data Science department at Jooli Inc. Your team is working on a number of machine learning initiatives related to urban mobility services. You are expected to help with the development and assessment of data sets and machine learning models to help provide insights based on real world data sets.
-
-You are expected to have the skills and knowledge for these tasks, so don't expect step-by-step guides to be provided.
-
-## Your challenge
-
-One of the projects you are working on needs to provide analysis based on real world data that will help in the selection of new bicycle models for public bike share systems. Your role in this project is to develop and evaluate machine learning models that can predict average trip durations for bike schemes using the public data from Austin's public bike share scheme to train and evaluate your models.
-
-Two of the senior data scientists in your team have different theories on what factors are important in determining the duration of a bike share trip, and you have been asked to prioritise these two approaches to start. The first data scientist maintains that the key factors are the start station, the location of the start station, the day of the week, and the hour the trip started, while the second data scientist argues that this is an overcomplication and the key factors are simply start station, subscriber type, and the hour the trip started.
-
-You have been asked to develop a machine learning model based on each of these input features. Given the fact that stay-at-home orders were in place for Austin during parts of 2021 as a result of COVID-19, you will be working on data from previous years. You have been instructed to train your models on data from `Training Year` and then evaluate them against data from `Evaluation Year` on the basis of Mean Absolute Error and the square root of Mean Squared Error.
-
-You can access the public data for the Austin bike share scheme in your project by opening [this link to the Austin bike share dataset](https://console.cloud.google.com/bigquery?p=bigquery-public-data&d=austin_bikeshare&page=dataset) in the browser tab for your lab.
-
-As a final step, you must create and run a query that uses the model that includes subscriber type as a feature, to predict the average trip duration for all trips from the busiest bike sharing station in `Evaluation Year` (based on the number of trips per station in `Evaluation Year`) where the subscriber type is 'Single Trip'.
-
-## Setup
-
-```bash
-gcloud auth list
-
-gcloud config list project
-```
-
-### Task 1. Create a dataset to store your machine learning models
-
-- Create a new dataset in which you can store your machine learning models.
-
-Go to your Cloud Shell and run the following command to create the dataset:
-
-```bash
-bq mk austin
-```
-
-### Task 2. Create a forecasting BigQuery machine learning model
-
-- Create the first machine learning model to predict the trip duration for bike trips.
-
-The features of this model must incorporate the starting station name, the hour the trip started, the weekday of the trip, and the address of the start station labeled as `location`. You must use `Training Year` data only to train this model.
-
-Go to BigQuery to make the first model and run the following query:
-
-Replace `<****Training_Year****>` with the year you are using for training.
- -The year in your lab variable looks like this: - -![year](./images/year.webp#center) - -```sql -CREATE OR REPLACE MODEL austin.location_model -OPTIONS - (model_type='linear_reg', labels=['duration_minutes']) AS -SELECT - start_station_name, - EXTRACT(HOUR FROM start_time) AS start_hour, - EXTRACT(DAYOFWEEK FROM start_time) AS day_of_week, - duration_minutes, - address as location -FROM - `bigquery-public-data.austin_bikeshare.bikeshare_trips` AS trips -JOIN - `bigquery-public-data.austin_bikeshare.bikeshare_stations` AS stations -ON - trips.start_station_name = stations.name -WHERE - EXTRACT(YEAR FROM start_time) = <****Training_Year****> - AND duration_minutes > 0 -``` - -### Task 3. Create the second machine learning model - -- Create the second machine learning model to predict the trip duration for bike trips. - -The features of this model must incorporate the starting station name, the bike share subscriber type and the start time for the trip. You must also use `Training Year` data only to train this model. - -Go to BigQuery to make the second model and run the following query: - -Replace `<****Training_Year****>` with the year you are using for training. - -```sql -CREATE OR REPLACE MODEL austin.subscriber_model -OPTIONS - (model_type='linear_reg', labels=['duration_minutes']) AS -SELECT - start_station_name, - EXTRACT(HOUR FROM start_time) AS start_hour, - subscriber_type, - duration_minutes -FROM `bigquery-public-data.austin_bikeshare.bikeshare_trips` AS trips -WHERE EXTRACT(YEAR FROM start_time) = <****Training_Year****> -``` - -### Task 4. Evaluate the two machine learning models - -- Evaluate each of the machine learning models against `Evaluation Year` data only using separate queries. - -Your queries must report both the Mean Absolute Error and the Root Mean Square Error. - -Go to BigQuery and run the following query: - -Replace `<****Evaluation_Year****>` with the year you are using for evaluating. - -```sql -SELECT - SQRT(mean_squared_error) AS rmse, - mean_absolute_error -FROM - ML.EVALUATE(MODEL austin.location_model, ( - SELECT - start_station_name, - EXTRACT(HOUR FROM start_time) AS start_hour, - EXTRACT(DAYOFWEEK FROM start_time) AS day_of_week, - duration_minutes, - address as location - FROM - `bigquery-public-data.austin_bikeshare.bikeshare_trips` AS trips - JOIN - `bigquery-public-data.austin_bikeshare.bikeshare_stations` AS stations - ON - trips.start_station_name = stations.name - WHERE EXTRACT(YEAR FROM start_time) = <****Evaluation_Year****> ) -) -``` - -```sql -SELECT - SQRT(mean_squared_error) AS rmse, - mean_absolute_error -FROM - ML.EVALUATE(MODEL austin.subscriber_model, ( - SELECT - start_station_name, - EXTRACT(HOUR FROM start_time) AS start_hour, - subscriber_type, - duration_minutes - FROM - `bigquery-public-data.austin_bikeshare.bikeshare_trips` AS trips - WHERE - EXTRACT(YEAR FROM start_time) = <****Evaluation_Year****>) -) -``` - -### Task 5. Use the subscriber type machine learning model to predict average trip durations - -- When both models have been created and evaluated, use the second model, that uses `subscriber_type` as a feature, to predict average trip length for trips from the busiest bike sharing station in `Evaluation Year` where the subscriber type is `Single Trip`. - -Go to BigQuery and run the following query: - -Replace `<****Evaluation_Year****>` with the year you are using for evaluating. 
- -```sql -SELECT - start_station_name, - COUNT(*) AS trips -FROM - `bigquery-public-data.austin_bikeshare.bikeshare_trips` -WHERE - EXTRACT(YEAR FROM start_time) = <****Evaluation_Year****> -GROUP BY - start_station_name -ORDER BY - trips DESC -``` - -```sql -SELECT AVG(predicted_duration_minutes) AS average_predicted_trip_length -FROM ML.predict(MODEL austin.subscriber_model, ( -SELECT - start_station_name, - EXTRACT(HOUR FROM start_time) AS start_hour, - subscriber_type, - duration_minutes -FROM - `bigquery-public-data.austin_bikeshare.bikeshare_trips` -WHERE - EXTRACT(YEAR FROM start_time) = <****Evaluation_Year****> - AND subscriber_type = 'Single Trip' - AND start_station_name = '21st & Speedway @PCL')) -``` - -## Congratulations! - -![Congratulations Badge](https://cdn.qwiklabs.com/XHgD9wRAAlXktQmoNrUOvbg38ZBrazddtSoYHS55d8o%3D#center) diff --git a/content/writeups/google-cloudskillsboost/GSP342/images/lab_variable.webp b/content/writeups/google-cloudskillsboost/GSP342/images/lab_variable.webp deleted file mode 100644 index a41b2f9..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP342/images/lab_variable.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP342/index.md b/content/writeups/google-cloudskillsboost/GSP342/index.md deleted file mode 100644 index 6bb0eb3..0000000 --- a/content/writeups/google-cloudskillsboost/GSP342/index.md +++ /dev/null @@ -1,215 +0,0 @@ ---- -title: "[GSP342] Ensure Access & Identity in Google Cloud: Challenge Lab" -description: "" -summary: "Quest: Ensure Access & Identity in Google Cloud" -date: 2023-05-26T01:05:15+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["writeups", "challenge", "google-cloudskillsboost", "gsp342", "google-cloud", "cloudskillsboost", "juaragcp", "google-cloud-platform", "gcp", "cloud-computing", "kubernetes", "container", "iam"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -weight: 12 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "" # image path/url - alt: "" # alt text - caption: "" # display caption under cover - relative: false # when using page bundles set this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/blob/main/content/writeups/google-cloudskillsboost/GSP342/index.md" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -### GSP342 - -![Lab Banner](https://cdn.qwiklabs.com/GMOHykaqmlTHiqEeQXTySaMXYPHeIvaqa2qHEzw6Occ%3D#center) - -- Time: 1 hour 30 minutes
-- Difficulty: Intermediate
-- Price: 5 Credits - -Lab: [GSP342](https://www.cloudskillsboost.google/focuses/14572?parent=catalog)
-Quest: [Ensure Access & Identity in Google Cloud](https://www.cloudskillsboost.google/quests/150)
-
-## Challenge scenario
-
-You have started a new role as a junior member of the security team for the Orca team in Jooli Inc. Your team is responsible for ensuring the security of the Cloud infrastructure and services that the company's applications depend on.
-
-You are expected to have the skills and knowledge for these tasks, so don't expect step-by-step guides to be provided.
-
-## Your challenge
-
-You have been asked to deploy, configure, and test a new Kubernetes Engine cluster that will be used for application development and pipeline testing by the Orca development team.
-
-As per the organisation's most recent security standards, you must ensure that the new Kubernetes Engine cluster complies with the following:
-
-- The cluster must be deployed using a dedicated service account configured with the least privileges required.
-- The cluster must be deployed as a Kubernetes Engine private cluster, with the public endpoint disabled, and the master authorized network set to include only the IP address of the Orca group's management jumphost.
-- The Kubernetes Engine private cluster must be deployed to the `orca-build-subnet` in the Orca Build VPC.
-
-From a previous project you know that the minimum permissions required by the service account that is specified for a Kubernetes Engine cluster are covered by these three built-in roles:
-
-- `roles/monitoring.viewer`
-- `roles/monitoring.metricWriter`
-- `roles/logging.logWriter`
-
-These roles are specified in the Google Kubernetes Engine (GKE) Harden your cluster's security guide, in the [Use least privilege Google service accounts](https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster#use_least_privilege_sa) section.
-
-You must bind the above roles to the service account used by the cluster, as well as a custom role that you must create in order to provide access to any other services specified by the development team. Initially you have been told that the development team requires that the service account used by the cluster should have the permissions necessary to add and update objects in Google Cloud Storage buckets. To do this you will have to create a new custom IAM role that will provide the following permissions:
-
-- `storage.buckets.get`
-- `storage.objects.get`
-- `storage.objects.list`
-- `storage.objects.update`
-- `storage.objects.create`
-
-Once you have created the new private cluster you must test that it is correctly configured by connecting to it from the jumphost, `orca-jumphost`, in the management subnet `orca-mgmt-subnet`. As this compute instance is not in the same subnet as the private cluster, you must make sure that the master authorized networks for the cluster include the internal IP address of the instance, and you must specify the `--internal-ip` flag when retrieving cluster credentials using the `gcloud container clusters get-credentials` command.
-
-All new cloud objects and services that you create should include the "orca-" prefix.
-
-Your final task is to validate that the cluster is working correctly by deploying a simple application to it, to test that management access to the cluster using the `kubectl` tool works from the `orca-jumphost` compute instance.
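-
-In practice, that validation boils down to fetching cluster credentials over the internal IP and running a couple of `kubectl` commands from the jumphost. A minimal sketch, assuming the cluster name and zone from your lab instructions (the full steps are in Task 5 below):
-
-```bash
-# Run from the orca-jumphost instance; the cluster name and zone are
-# placeholders, substitute the values from your lab instructions.
-gcloud container clusters get-credentials orca-cluster-995 --zone us-east1-b --internal-ip
-
-# If the master authorized networks include the jumphost's internal IP,
-# the API server answers and the node list comes back.
-kubectl get nodes
-```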
-
-## Setup
-
-Define variables:
-
-```bash
-export CUSTOM_SECURIY_ROLE=
-export SERVICE_ACCOUNT=
-export CLUSTER_NAME=
-```
-
-For example, in my case:
-
-![lab variable](./images/lab_variable.webp#center)
-
-```bash
-export CUSTOM_SECURIY_ROLE=orca_storage_editor_923
-export SERVICE_ACCOUNT=orca-private-cluster-278-sa
-export CLUSTER_NAME=orca-cluster-995
-```
-
-### Task 1. Create a custom security role.
-
-Set the default zone to `us-east1-b` and create a `role-definition.yaml` file.
-
-```bash
-gcloud config set compute/zone us-east1-b
-```
-
-Create the `role-definition.yaml` file.
-
-```bash
-cat <<EOF > role-definition.yaml
-title: "<TITLE>"
-description: "<DESCRIPTION>"
-stage: "ALPHA"
-includedPermissions:
-- storage.buckets.get
-- storage.objects.get
-- storage.objects.list
-- storage.objects.update
-- storage.objects.create
-EOF
-```
-
-Replace `<TITLE>` and `<DESCRIPTION>` with the variables using the [sed](https://linux.die.net/man/1/sed) command.
-
-```bash
-sed -i "s/<TITLE>/$CUSTOM_SECURIY_ROLE/g" role-definition.yaml
-sed -i "s/<DESCRIPTION>/Permission/g" role-definition.yaml
-```
-
-Create the custom security role:
-
-```bash
-gcloud iam roles create $CUSTOM_SECURIY_ROLE --project $DEVSHELL_PROJECT_ID --file role-definition.yaml
-```
-
-### Task 2. Create a service account.
-
-```bash
-gcloud iam service-accounts create $SERVICE_ACCOUNT --display-name "${SERVICE_ACCOUNT} Service Account"
-```
-
-### Task 3. Bind a custom security role to a service account.
-
-```bash
-gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:$SERVICE_ACCOUNT@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role roles/monitoring.viewer
-
-gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:$SERVICE_ACCOUNT@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role roles/monitoring.metricWriter
-
-gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:$SERVICE_ACCOUNT@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role roles/logging.logWriter
-
-gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID --member serviceAccount:$SERVICE_ACCOUNT@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role projects/$DEVSHELL_PROJECT_ID/roles/$CUSTOM_SECURIY_ROLE
-```
-
-### Task 4. Create and configure a new Kubernetes Engine private cluster
-
-```bash
-gcloud config set compute/zone us-east1-b
-
-gcloud container clusters create $CLUSTER_NAME --num-nodes 1 --master-ipv4-cidr=172.16.0.64/28 --network orca-build-vpc --subnetwork orca-build-subnet --enable-master-authorized-networks --master-authorized-networks 192.168.10.2/32 --enable-ip-alias --enable-private-nodes --enable-private-endpoint --service-account $SERVICE_ACCOUNT@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --zone us-east1-b
-```
-
-### Task 5. Deploy an application to a private Kubernetes Engine cluster.
-
-Connect to the `orca-jumphost` compute instance (SSH).
- -```bash -gcloud compute ssh --zone "us-east1-b" "orca-jumphost" -``` - -Define variables: - -```bash -export CUSTOM_SECURIY_ROLE= -export SERVICE_ACCOUNT= -export CLUSTER_NAME= -``` - -for example, in my case: - -![lab variable](./images/lab_variable.webp#center) - -```bash -export CUSTOM_SECURIY_ROLE=orca_storage_editor_923 -export SERVICE_ACCOUNT=orca-private-cluster-278-sa -export CLUSTER_NAME=orca-cluster-995 -``` - -Install the [gcloud auth plugin for Kubernetes](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke): - -```bash -sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin -``` - -Create and expose a deployment in Kubernetes: - -```bash -gcloud container clusters get-credentials $CLUSTER_NAME --zone=us-east1-b --internal-ip - -kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0 - -kubectl expose deployment hello-server --name orca-hello-service --type LoadBalancer --port 80 --target-port 8080 -``` - -## Congratulations! - -![Congratulations Badge](https://cdn.qwiklabs.com/H1yrV5fK7wntMMH8epHGm%2FfXhK59czV8mEuoTxFfi2o%3D#center) diff --git a/content/writeups/google-cloudskillsboost/GSP345/images/Instance_ID.webp b/content/writeups/google-cloudskillsboost/GSP345/images/Instance_ID.webp deleted file mode 100644 index 8c1855e..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP345/images/Instance_ID.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP345/index.md b/content/writeups/google-cloudskillsboost/GSP345/index.md deleted file mode 100644 index 0937a28..0000000 --- a/content/writeups/google-cloudskillsboost/GSP345/index.md +++ /dev/null @@ -1,542 +0,0 @@ ---- -title: "[GSP345] Automating Infrastructure on Google Cloud with Terraform: Challenge Lab" -description: "" -summary: "Quest: Automating Infrastructure on Google Cloud with Terraform" -date: 2023-05-19T07:02:15+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["writeups", "challenge", "google-cloudskillsboost", "gsp345", "google-cloud", "cloudskillsboost", "juaragcp", "google-cloud-platform", "gcp", "cloud-computing", "terraform", "automation", "infrastructure", "vpc", "firewall"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -weight: 13 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "<image path/url>" # image path/url - alt: "<alt text>" # alt text - caption: "<text>" # display caption under cover - relative: false # when using page bundles set this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/blob/main/content/writeups/google-cloudskillsboost/GSP345/index.md" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -### GSP345 - -![Lab Banner](https://cdn.qwiklabs.com/GMOHykaqmlTHiqEeQXTySaMXYPHeIvaqa2qHEzw6Occ%3D#center) - -- Time: 1 hour 30 minutes<br> -- Difficulty: Introductory<br> -- Price: 1 Credit - -Lab: [GSP345](https://www.cloudskillsboost.google/focuses/42740?parent=catalog)<br> -Quest: [Automating Infrastructure on Google Cloud with Terraform](https://www.cloudskillsboost.google/quests/159)<br> - -## Challenge scenario - 
-You are a cloud engineer intern for a new startup. For your first project, your new boss has tasked you with creating infrastructure in a quick and efficient manner and generating a mechanism to keep track of it for future reference and changes. You have been directed to use [Terraform](https://www.terraform.io/) to complete the project. - -For this project, you will use Terraform to create, deploy, and keep track of infrastructure on the startup's preferred provider, Google Cloud. You will also need to import some mismanaged instances into your configuration and fix them. - -In this lab, you will use Terraform to import and create multiple VM instances, a VPC network with two subnetworks, and a firewall rule for the VPC to allow connections between the two instances. You will also create a Cloud Storage bucket to host your remote backend. - - -### Task 1. Create the configuration files - -1. Make the empty files and directories in Cloud Shell or the Cloud Shell Editor. - - ```bash - touch main.tf - touch variables.tf - mkdir modules - cd modules - mkdir instances - cd instances - touch instances.tf - touch outputs.tf - touch variables.tf - cd .. - mkdir storage - cd storage - touch storage.tf - touch outputs.tf - touch variables.tf - cd - ``` - - Folder structure should look like this: - - ```bash - main.tf - variables.tf - modules/ - └── instances - ├── instances.tf - ├── outputs.tf - └── variables.tf - └── storage - ├── storage.tf - ├── outputs.tf - └── variables.tf - ``` - -2. Add the following to the each variables.tf file, and replace `PROJECT_ID` with your GCP Project ID, also change the `REGION` and the `ZONE` based on the lab instructions. - - ```terraform - variable "region" { - default = "<****us-central1****>" - } - - variable "zone" { - default = "<****us-central1-a****>" - } - - variable "project_id" { - default = "<****PROJECT_ID****>" - } - ``` - -3. Add the following to the `main.tf` file. - - ```terraform - terraform { - required_providers { - google = { - source = "hashicorp/google" - version = "4.53.0" - } - } - } - - provider "google" { - project = var.project_id - region = var.region - zone = var.zone - } - - module "instances" { - source = "./modules/instances" - } - ``` - -4. Run the following commands in Cloud Shell in the root directory to initialize terraform. - - ```bash - terraform init - ``` - -### Task 2. Import infrastructure - -1. In the Cloud Console, go to the **Navigation menu** and select **Compute Engine**. -2. Click the `tf-instance-1`, then copy the **Instance ID** down somewhere to use later. - ![Instance ID](./images/Instance_ID.webp#center) -3. In the Cloud Console, go to the **Navigation menu** and select **Compute Engine**. -4. Do the same thing on previous step, click the `tf-instance-2`, then copy the **Instance ID** down somewhere to use later. -5. Next, navigate to `modules/instances/instances.tf`. Copy the following configuration into the file. 
- - ```terraform - resource "google_compute_instance" "tf-instance-1" { - name = "tf-instance-1" - machine_type = "n1-standard-1" - zone = var.zone - - boot_disk { - initialize_params { - image = "debian-cloud/debian-10" - } - } - - network_interface { - network = "default" - } - - metadata_startup_script = <<-EOT - #!/bin/bash - EOT - allow_stopping_for_update = true - } - - resource "google_compute_instance" "tf-instance-2" { - name = "tf-instance-2" - machine_type = "n1-standard-1" - zone = var.zone - - boot_disk { - initialize_params { - image = "debian-cloud/debian-10" - } - } - - network_interface { - network = "default" - } - - metadata_startup_script = <<-EOT - #!/bin/bash - EOT - allow_stopping_for_update = true - } - ``` - -6. Run the following commands in Cloud Shell to import the first instance. Replace `INSTANCE_ID_1` with **Instance ID** for `tf-instance-1` you copied down earlier. - - ```bash - terraform import module.instances.google_compute_instance.tf-instance-1 <****INSTANCE_ID_1****> - ``` - -7. Run the following commands in Cloud Shell to import the first instance. Replace `INSTANCE_ID_2` with **Instance ID** for `tf-instance-2` you copied down earlier. - - ```bash - terraform import module.instances.google_compute_instance.tf-instance-2 <****INSTANCE_ID_2****> - ``` - -8. Run the following commands to apply your changes. - - ```bash - terraform plan - - terraform apply - ``` - -### Task 3. Configure a remote backend - -1. Add the following code to the `modules/storage/storage.tf` file. Replace `BUCKET_NAME` with bucket name given in lab instructions. - - ```terraform - resource "google_storage_bucket" "storage-bucket" { - name = "<****BUCKET_NAME****>" - location = "US" - force_destroy = true - uniform_bucket_level_access = true - } - ``` - -2. Next, add the following to the `main.tf` file. - - ```terraform - module "storage" { - source = "./modules/storage" - } - ``` - -3. Run the following commands to initialize the module and create the storage bucket resource. Type `yes` at the dialogue after you run the apply command to accept the state changes. - - ```bash - terraform init - - terraform apply - ``` - -4. Next, update the `main.tf` file so that the terraform block looks like the following. Fill in your GCP Project ID for the bucket argument definition. Replace `BUCKET_NAME` with Bucket Name given in lab instructions. - - ```terraform - terraform { - backend "gcs" { - bucket = "<****BUCKET_NAME****>" - prefix = "terraform/state" - } - - required_providers { - google = { - source = "hashicorp/google" - version = "4.53.0" - } - } - } - ``` - -5. Run the following commands to initialize the remote backend. Type `yes` at the prompt. - - ```bash - terraform init - ``` - -### Task 4. Modify and update infrastructure - -1. Navigate to `modules/instances/instance.tf`. Replace the entire contents of the file with the following, then replace `INSTANCE_NAME` with instance name given in lab instructions. 
- - ```terraform - resource "google_compute_instance" "tf-instance-1" { - name = "tf-instance-1" - machine_type = "n1-standard-2" - zone = var.zone - - boot_disk { - initialize_params { - image = "debian-cloud/debian-10" - } - } - - network_interface { - network = "default" - } - - metadata_startup_script = <<-EOT - #!/bin/bash - EOT - allow_stopping_for_update = true - } - - resource "google_compute_instance" "tf-instance-2" { - name = "tf-instance-2" - machine_type = "n1-standard-2" - zone = var.zone - - boot_disk { - initialize_params { - image = "debian-cloud/debian-10" - } - } - - network_interface { - network = "default" - } - - metadata_startup_script = <<-EOT - #!/bin/bash - EOT - allow_stopping_for_update = true - } - - resource "google_compute_instance" "<****INSTANCE_NAME****>" { - name = "<****INSTANCE_NAME****>" - machine_type = "n1-standard-2" - zone = var.zone - - boot_disk { - initialize_params { - image = "debian-cloud/debian-10" - } - } - - network_interface { - network = "default" - } - - metadata_startup_script = <<-EOT - #!/bin/bash - EOT - allow_stopping_for_update = true - } - ``` - -2. Run the following commands to initialize the module and create/update the instance resources. Type `yes` at the dialogue after you run the apply command to accept the state changes. - - ```bash - terraform init - - terraform apply - ``` - -### Task 5. Destroy resources - -1. Taint the `INSTANCE_NAME` resource by running the following command. - - ```bash - terraform taint module.instances.google_compute_instance.<****INSTANCE_NAME****> - ``` - -2. Run the following commands to apply the changes. - - ```bash - terraform init - - terraform apply - ``` - -3. Remove the `INSTANCE_NAME` (instance 3) resource from the `instances.tf` file. Delete the following code chunk from the file. - - ```terraform - resource "google_compute_instance" "<****INSTANCE_NAME****>" { - name = "<****INSTANCE_NAME****>" - machine_type = "n1-standard-2" - zone = var.zone - - boot_disk { - initialize_params { - image = "debian-cloud/debian-10" - } - } - - network_interface { - network = "default" - } - - metadata_startup_script = <<-EOT - #!/bin/bash - EOT - allow_stopping_for_update = true - } - ``` - -4. Run the following commands to apply the changes. Type yes at the prompt. - - ```bash - terraform apply - ``` - -### Task 6. Use a module from the Registry - -1. Copy and paste the following into the `main.tf` file. Replace `VPC_NAME` with VPC Name given in lab instructions. - - ```terraform - module "vpc" { - source = "terraform-google-modules/network/google" - version = "~> 6.0.0" - - project_id = var.project_id - network_name = "<****VPC_NAME****>" - routing_mode = "GLOBAL" - - subnets = [ - { - subnet_name = "subnet-01" - subnet_ip = "10.10.10.0/24" - subnet_region = var.region - }, - { - subnet_name = "subnet-02" - subnet_ip = "10.10.20.0/24" - subnet_region = var.region - subnet_private_access = "true" - subnet_flow_logs = "true" - description = "This subnet has a description" - } - ] - } - ``` - -2. Run the following commands to initialize the module and create the VPC. Type `yes` at the prompt. - - ```bash - terraform init - - terraform apply - ``` - -3. Navigate to `modules/instances/instances.tf`. Replace the entire contents of the file with the following. Replace `VPC_NAME` with VPC Name given in lab instructions. 
- - ```terraform - resource "google_compute_instance" "tf-instance-1" { - name = "tf-instance-1" - machine_type = "n1-standard-2" - zone = var.zone - - boot_disk { - initialize_params { - image = "debian-cloud/debian-10" - } - } - - network_interface { - network = "<****VPC_NAME****>" - subnetwork = "subnet-01" - } - - metadata_startup_script = <<-EOT - #!/bin/bash - EOT - allow_stopping_for_update = true - } - - resource "google_compute_instance" "tf-instance-2" { - name = "tf-instance-2" - machine_type = "n1-standard-2" - zone = var.zone - - boot_disk { - initialize_params { - image = "debian-cloud/debian-10" - } - } - - network_interface { - network = "<****VPC_NAME****>" - subnetwork = "subnet-02" - } - - metadata_startup_script = <<-EOT - #!/bin/bash - EOT - allow_stopping_for_update = true - } - - module "vpc" { - source = "terraform-google-modules/network/google" - version = "~> 6.0.0" - - project_id = "*****PROJECT_ID****" - network_name = "****VPC_NAME*****" - routing_mode = "GLOBAL" - - subnets = [ - { - subnet_name = "subnet-01" - subnet_ip = "10.10.10.0/24" - subnet_region = "us-central1" - }, - { - subnet_name = "subnet-02" - subnet_ip = "10.10.20.0/24" - subnet_region = "us-central1" - subnet_private_access = "true" - subnet_flow_logs = "true" - description = "This subnet has a description" - }, - ] - } - ``` - -4. Run the following commands to initialize the module and update the instances. Type `yes` at the prompt. - - ```bash - terraform init - - terraform apply - ``` - -### Task 7. Configure a firewall - -1. Add the following resource to the `main.tf` file and replace `PROJECT_ID` and `VPC_NAME` with your GCP Project ID and VPC Name given in lab instructions. - - ```terraform - resource "google_compute_firewall" "tf-firewall" { - name = "tf-firewall" - network = "projects/<****PROJECT_ID****>/global/networks/<****VPC_NAME****>" - - allow { - protocol = "tcp" - ports = ["80"] - } - - source_tags = ["web"] - source_ranges = ["0.0.0.0/0"] - } - ``` - -2. Run the following commands to configure the firewall. Type `yes` at the prompt. - - ```bash - terraform init - - terraform apply - ``` - -## Congratulations! 
- -![Badge](https://cdn.qwiklabs.com/RGaT7KirRAjGDJTaOTOuax2BzYId0zvvGTs%2BPpGlcQI%3D#center) diff --git a/content/writeups/google-cloudskillsboost/GSP787/images/date variable.webp b/content/writeups/google-cloudskillsboost/GSP787/images/date variable.webp deleted file mode 100644 index 1fe8f7f..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP787/images/date variable.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP787/images/deaths.webp b/content/writeups/google-cloudskillsboost/GSP787/images/deaths.webp deleted file mode 100644 index c486a45..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP787/images/deaths.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP787/images/limit.webp b/content/writeups/google-cloudskillsboost/GSP787/images/limit.webp deleted file mode 100644 index e331edc..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP787/images/limit.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP787/images/looker_date.webp b/content/writeups/google-cloudskillsboost/GSP787/images/looker_date.webp deleted file mode 100644 index 6683924..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP787/images/looker_date.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP787/images/month.webp b/content/writeups/google-cloudskillsboost/GSP787/images/month.webp deleted file mode 100644 index 3272880..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP787/images/month.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP787/images/percentage.webp b/content/writeups/google-cloudskillsboost/GSP787/images/percentage.webp deleted file mode 100644 index 6990fc2..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP787/images/percentage.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP787/images/start_close_date.webp b/content/writeups/google-cloudskillsboost/GSP787/images/start_close_date.webp deleted file mode 100644 index 57614ac..0000000 Binary files a/content/writeups/google-cloudskillsboost/GSP787/images/start_close_date.webp and /dev/null differ diff --git a/content/writeups/google-cloudskillsboost/GSP787/index.md b/content/writeups/google-cloudskillsboost/GSP787/index.md deleted file mode 100644 index fdefc17..0000000 --- a/content/writeups/google-cloudskillsboost/GSP787/index.md +++ /dev/null @@ -1,452 +0,0 @@ ---- -title: "[GSP787] Insights from Data with BigQuery: Challenge Lab" -description: "" -summary: "Quest: Insights from Data with BigQuery" -date: 2023-05-20T21:01:15+07:00 -draft: false -author: "Hiiruki" # ["Me", "You"] # multiple authors -tags: ["writeups", "challenge", "google-cloudskillsboost", "gsp787", "google-cloud", "cloudskillsboost", "juaragcp", "google-cloud-platform", "gcp", "cloud-computing", "bigquery", "sql"] -canonicalURL: "" -showToc: true -TocOpen: false -TocSide: 'right' # or 'left' -weight: 14 -# aliases: ["/first"] -hidemeta: false -comments: false -disableHLJS: true # to disable highlightjs -disableShare: true -hideSummary: false -searchHidden: false -ShowReadingTime: true -ShowBreadCrumbs: true -ShowPostNavLinks: true -ShowWordCount: true -ShowRssButtonInSectionTermList: true -# UseHugoToc: true -cover: - image: "<image path/url>" # image path/url - alt: "<alt text>" # alt text - caption: "<text>" # display caption under cover - relative: false # when using page bundles set 
this to true - hidden: true # only hide on current single page -# editPost: -# URL: "https://github.com/hiiruki/hiiruki.dev/blob/main/content/writeups/google-cloudskillsboost/GSP787/index.md" -# Text: "Suggest Changes" # edit text -# appendFilePath: true # to append file path to Edit link ---- - -### GSP787 - -![Lab Banner](https://cdn.qwiklabs.com/GMOHykaqmlTHiqEeQXTySaMXYPHeIvaqa2qHEzw6Occ%3D#center) - -- Time: 1 hour<br> -- Difficulty: Intermediate<br> -- Price: 5 Credits - -Lab: [GSP787](https://www.cloudskillsboost.google/focuses/14294?parent=catalog)<br> -Quest: [Insights from Data with BigQuery](https://www.cloudskillsboost.google/quests/123)<br> - -## Challenge lab scenario - -You're part of a public health organization which is tasked with identifying answers to queries related to the Covid-19 pandemic. Obtaining the right answers will help the organization in planning and focusing healthcare efforts and awareness programs appropriately. - -The dataset and table that will be used for this analysis will be : `bigquery-public-data.covid19_open_data.covid19_open_data`. This repository contains country-level datasets of daily time-series data related to COVID-19 globally. It includes data relating to demographics, economy, epidemiology, geography, health, hospitalizations, mobility, government response, and weather. - -### Task 1. Total confirmed cases - -- Build a query that will answer "What was the total count of confirmed cases on `Date`?" The query needs to return a single row containing the sum of confirmed cases across all countries. The name of the column should be **total_cases_worldwide**. - -Columns to reference: - -- cumulative_confirmed -- date - -Go to BigQuery and run the following query: - -Change the `date` based on the lab instructions. - -![Date Variable](./images/date%20variable.webp#center) - -```sql -SELECT sum(cumulative_confirmed) as total_cases_worldwide -FROM `bigquery-public-data.covid19_open_data.covid19_open_data` -WHERE date=<****change date eg '2020-05-15'****> -``` - -Mine is `May, 15 2020`. So, I will change the date to `2020-05-15`. - -example: - -```sql -SELECT sum(cumulative_confirmed) as total_cases_worldwide -FROM `bigquery-public-data.covid19_open_data.covid19_open_data` -WHERE date='2020-05-15' -``` - -### Task 2. Worst affected areas - -- Build a query for answering "How many states in the US had more than `Death Count` deaths on `Date`?" The query needs to list the output in the field **count_of_states**. - -> **Note**: Don't include NULL values. - -Columns to reference: - -- country_name -- subregion1_name (for state information) -- cumulative_deceased - -Go to BigQuery and run the following query: - -Change the `date` and `death_count` based on the lab instructions. - -```sql -with deaths_by_states as ( - SELECT subregion1_name as state, sum(cumulative_deceased) as death_count - FROM `bigquery-public-data.covid19_open_data.covid19_open_data` - where country_name="United States of America" and date=<****change date eg '2020-05-15'****> and subregion1_name is NOT NULL - group by subregion1_name -) -select count(*) as count_of_states -from deaths_by_states -where death_count > <****change death count here****> -``` - -Mine is `250` deaths. So, I will change the `death_count` to `250`. 
- -![Date and Death Count Variable](./images/deaths.webp#center) - -example: - -```sql -with deaths_by_states as ( - SELECT subregion1_name as state, sum(cumulative_deceased) as death_count - FROM `bigquery-public-data.covid19_open_data.covid19_open_data` - where country_name="United States of America" and date='2020-05-15' and subregion1_name is NOT NULL - group by subregion1_name -) -select count(*) as count_of_states -from deaths_by_states -where death_count > 250 -``` - -### Task 3. Identifying hotspots - -- Build a query that will answer "List all the states in the United States of America that had more than `Confirmed Cases` confirmed cases on `Date`?" The query needs to return the State Name and the corresponding confirmed cases arranged in descending order. Name of the fields to return state and **total_confirmed_cases**. - -Columns to reference: - -- country_code -- subregion1_name (for state information) -- cumulative_confirmed - -Go to BigQuery and run the following query: - -```sql -SELECT * FROM ( - SELECT subregion1_name as state, sum(cumulative_confirmed) as total_confirmed_cases - FROM `bigquery-public-data.covid19_open_data.covid19_open_data` - WHERE country_code="US" AND date=<****change date eg '2020-05-15'****> AND subregion1_name is NOT NULL - GROUP BY subregion1_name - ORDER BY total_confirmed_cases DESC -) -WHERE total_confirmed_cases > <****change confirmed case here****> -``` - -### Task 4. Fatality ratio - -1. Build a query that will answer "What was the case-fatality ratio in Italy for the month of Month 2020?" Case-fatality ratio here is defined as (total deaths / total confirmed cases) * 100. - -2. Write a query to return the ratio for the month of Month 2020 and contain the following fields in the output: total_confirmed_cases, total_deaths, case_fatality_ratio. - -Columns to reference: - -- country_name -- cumulative_confirmed -- cumulative_deceased - -Go to BigQuery and run the following query: - -```sql -SELECT sum(cumulative_confirmed) as total_confirmed_cases, sum(cumulative_deceased) as total_deaths, (sum(cumulative_deceased)/sum(cumulative_confirmed))*100 as case_fatality_ratio -FROM `bigquery-public-data.covid19_open_data.covid19_open_data` -where country_name="Italy" AND date BETWEEN <****change month here '2020-06-01'****> and <****change month here '2020-06-30'****> -``` - -Change the `month` based on the lab instructions. - -![Month Variable](./images/month.webp#center) - -Mine is `June, 2020`. So, I will change the month to `2020-06-01` and `2020-06-30`. - -example: - -```sql -SELECT sum(cumulative_confirmed) as total_confirmed_cases, sum(cumulative_deceased) as total_deaths, (sum(cumulative_deceased)/sum(cumulative_confirmed))*100 as case_fatality_ratio -FROM `bigquery-public-data.covid19_open_data.covid19_open_data` -where country_name="Italy" AND date BETWEEN '2020-06-01' and '2020-06-30' -``` - -### Task 5. Identifying specific day - -- Build a query that will answer: "On what day did the total number of deaths cross `Death count in Italy` in Italy?" The query should return the date in the format **yyyy-mm-dd**. - -Columns to reference: - -- country_name -- cumulative_deceased - -Go to BigQuery and run the following query: - -```sql -SELECT date -FROM `bigquery-public-data.covid19_open_data.covid19_open_data` -where country_name="Italy" and cumulative_deceased> <****change the value of death cross****> -order by date asc -limit 1 -``` - -### Task 6. 
Finding days with zero net new cases - -The following query is to identify the number of days in India between `Start date in India` and `Close date in India` when there were zero increases in the number of confirmed cases. - -Go to BigQuery and run the following query: - -```sql -WITH india_cases_by_date AS ( - SELECT - date, - SUM( cumulative_confirmed ) AS cases - FROM - `bigquery-public-data.covid19_open_data.covid19_open_data` - WHERE - country_name ="India" - AND date between < ****change the date here'2020-02-21'****> and <****change the date here'2020-03-15'****> - GROUP BY - date - ORDER BY - date ASC - ) -, india_previous_day_comparison AS -(SELECT - date, - cases, - LAG(cases) OVER(ORDER BY date) AS previous_day, - cases - LAG(cases) OVER(ORDER BY date) AS net_new_cases -FROM india_cases_by_date -) -select count(*) -from india_previous_day_comparison -where net_new_cases=0 -``` - -Change the `start date` in India and `close date` in India based on the lab instructions. - -![Start Date and Close Date](./images/start_close_date.webp#center) - -Mine is `25, Feb 2020` and `10, March 2020`. So, I will change the date to `2020-02-25` and `2020-03-10`. - -example: - -```sql -WITH india_cases_by_date AS ( - SELECT - date, - SUM( cumulative_confirmed ) AS cases - FROM - `bigquery-public-data.covid19_open_data.covid19_open_data` - WHERE - country_name ="India" - AND date between '2020-02-25' and '2020-03-10' - GROUP BY - date - ORDER BY - date ASC - ) -, india_previous_day_comparison AS -(SELECT - date, - cases, - LAG(cases) OVER(ORDER BY date) AS previous_day, - cases - LAG(cases) OVER(ORDER BY date) AS net_new_cases -FROM india_cases_by_date -) -select count(*) -from india_previous_day_comparison -where net_new_cases=0 -``` - -### Task 7. Doubling rate - -- Using the previous query as a template, write a query to find out the dates on which the confirmed cases increased by more than `Limit Value`% compared to the previous day (indicating doubling rate of ~ 7 days) in the US between the dates March 22, 2020 and April 20, 2020. The query needs to return the list of dates, the confirmed cases on that day, the confirmed cases the previous day, and the percentage increase in cases between the days. - - Use the following names for the returned fields: **Date**, **Confirmed_Cases_On_Day**, **Confirmed_Cases_Previous_Day**, and **Percentage_Increase_In_Cases**. - -Go to BigQuery and run the following query: - -Change the `Limit Value` based on the lab instructions. - -![Limit Value](./images/percentage.webp#center) - -Mine is `5`% so, I will change the value to `5`. - -```sql -WITH us_cases_by_date AS ( - SELECT - date, - SUM(cumulative_confirmed) AS cases - FROM - `bigquery-public-data.covid19_open_data.covid19_open_data` - WHERE - country_name="United States of America" - AND date between '2020-03-22' and '2020-04-20' - GROUP BY - date - ORDER BY - date ASC - ) -, us_previous_day_comparison AS -(SELECT - date, - cases, - LAG(cases) OVER(ORDER BY date) AS previous_day, - cases - LAG(cases) OVER(ORDER BY date) AS net_new_cases, - (cases - LAG(cases) OVER(ORDER BY date))*100/LAG(cases) OVER(ORDER BY date) AS percentage_increase -FROM us_cases_by_date -) -select Date, cases as Confirmed_Cases_On_Day, previous_day as Confirmed_Cases_Previous_Day, percentage_increase as Percentage_Increase_In_Cases -from us_previous_day_comparison -where percentage_increase > <****change percentage value here****> -``` - -### Task 8. Recovery rate - -1. 
Build a query to list the recovery rates of countries arranged in descending order (limit to `Limit Value`) upto the date May 10, 2020. - -2. Restrict the query to only those countries having more than 50K confirmed cases. - - The query needs to return the following fields: `country`, `recovered_cases`, `confirmed_cases`, `recovery_rate`. - -Columns to reference: - -- country_name -- cumulative_confirmed -- cumulative_recovered - -Go to BigQuery and run the following query: - -Change the `limit` based on the lab instructions. - -![Limit](./images/limit.webp#center) - -Mine is `5` so, I will change the value to `5`. - -```sql -WITH cases_by_country AS ( - SELECT - country_name AS country, - sum(cumulative_confirmed) AS cases, - sum(cumulative_recovered) AS recovered_cases - FROM - bigquery-public-data.covid19_open_data.covid19_open_data - WHERE - date = '2020-05-10' - GROUP BY - country_name - ) -, recovered_rate AS -(SELECT - country, cases, recovered_cases, - (recovered_cases * 100)/cases AS recovery_rate -FROM cases_by_country -) -SELECT country, cases AS confirmed_cases, recovered_cases, recovery_rate -FROM recovered_rate -WHERE cases > 50000 -ORDER BY recovery_rate desc -LIMIT <****change limit here****> -``` - -### Task 9. CDGR - Cumulative daily growth rate - -- The following query is trying to calculate the CDGR on `Date` (Cumulative Daily Growth Rate) for France since the day the first case was reported.The first case was reported on Jan 24, 2020. -- The CDGR is calculated as: - `((last_day_cases/first_day_cases)^1/days_diff)-1)` - -Where : - -- `last_day_cases` is the number of confirmed cases on May 10, 2020 -- `first_day_cases` is the number of confirmed cases on Jan 24, 2020 -- `days_diff` is the number of days between Jan 24 - May 10, 2020 - -Go to BigQuery and run the following query: - -```sql -WITH - france_cases AS ( - SELECT - date, - SUM(cumulative_confirmed) AS total_cases - FROM - `bigquery-public-data.covid19_open_data.covid19_open_data` - WHERE - country_name="France" - AND date IN ('2020-01-24', - <****change the date value here'2020-05-10'****>) - GROUP BY - date - ORDER BY - date) -, summary as ( -SELECT - total_cases AS first_day_cases, - LEAD(total_cases) OVER(ORDER BY date) AS last_day_cases, - DATE_DIFF(LEAD(date) OVER(ORDER BY date),date, day) AS days_diff -FROM - france_cases -LIMIT 1 -) -select first_day_cases, last_day_cases, days_diff, POW((last_day_cases/first_day_cases),(1/days_diff))-1 as cdgr -from summary -``` - -### Task 10. Create a Looker Studio report - -- Create a [Looker Studio](https://datastudio.google.com/) report that plots the following for the United States: - - Number of Confirmed Cases - - Number of Deaths - - Date range : `Date Range` - -Change the `Date Range` based on the lab instructions. - -![Date Range](./images/looker_date.webp#center) - -```sql -SELECT - date, SUM(cumulative_confirmed) AS country_cases, - SUM(cumulative_deceased) AS country_deaths -FROM - `bigquery-public-data.covid19_open_data.covid19_open_data` -WHERE - date BETWEEN <****change the date value here'2020-03-19'****> - AND <****change the date value here'2020-04-22'****> - AND country_name ="United States of America" -GROUP BY date -``` - -Mine is `2020-03-19` to `2020-04-22`. 
It should look like this: - -```sql -SELECT - date, SUM(cumulative_confirmed) AS country_cases, - SUM(cumulative_deceased) AS country_deaths -FROM - `bigquery-public-data.covid19_open_data.covid19_open_data` -WHERE - date BETWEEN '2020-03-19' - AND '2020-04-22' - AND country_name ="United States of America" -GROUP BY date -``` - -## Congratulations! - -![Congratulations Badge](https://cdn.qwiklabs.com/GfiFidoAd%2BrgYQRFgZggxgzMWJsGgFxnfA6bOWScimw%3D#center) diff --git a/content/writeups/google-cloudskillsboost/_index.md b/content/writeups/google-cloudskillsboost/_index.md deleted file mode 100644 index 2351738..0000000 --- a/content/writeups/google-cloudskillsboost/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: Google Cloud Skills Boost -summary: My solutions to the challenges in Google Cloud Skills Boost. -description: "Google Cloud Skills Boost is a training service that gives learners an on-demand, all-access pass to 700+ learning activities. Earn skill badges by taking courses, quests, and hands-on labs in topics such as data, AI, infrastructure, security, and more.<br>🔗 **https://www.cloudskillsboost.google/**<br><br>**Note**: Most writeups/walkthroughs from the platform are ***challenge labs***. If the lab is labeled _deprecated_, it means the lab has been updated and this solution will not work, but you can still use it to study." -hidemeta: true ---- diff --git a/hugo.yml b/hugo.yml index a6c47d5..4855e26 100644 --- a/hugo.yml +++ b/hugo.yml @@ -1,9 +1,9 @@ # Basic Information -baseURL: "https://hiiruki.dev/" +baseURL: "https://lemniskett.dev/" languageCode: en-us -title: "Hiiruki's Lab" +title: "Lemniskett's Stash" theme: Kamigo -copyright: '© 2023 [hiiruki.dev](https://hiiruki.dev) | [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) | [Privacy Policy](/privacy) | [Disclaimer](/disclaimer)' +copyright: '© 2023 [lemniskett.dev](https://lemniskett.dev) | [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) | [Privacy Policy](/privacy) | [Disclaimer](/disclaimer)' enableRobotsTXT: true buildDrafts: false @@ -21,7 +21,7 @@ minify: # Site parameters params: env: production - title: "Hiiruki's Lab" + title: "Lemniskett's Stash" description: "Thoughts and research on security, privacy, *nix based systems, and other IT stuff." keywords: [Blog, Research, Security, Privacy, Linux] DateFormat: "January 2, 2006" @@ -29,7 +29,6 @@ params: disableThemeToggle: false mainSections: - blog - - writeups author: Hiiruki ShowReadingTime: true @@ -58,42 +57,32 @@ params: # Home-info mode homeInfoParams: - title: "Hiiruki's Lab" + title: "Lemniskett's Stash" content: Thoughts and research on security, privacy, *nix based systems, and other IT stuff. 
# profile-mode profileMode: enabled: true # needs to be explicitly set - title: ひるき - subtitle: "_Astra inclinant, sed non obligant._" + title: Lemniskett + subtitle: "`./scripts/publish.sh ~/Personal\\ Notes/*`" imageUrl: "/images/profile.webp" imageWidth: 120 imageHeight: 120 - imageTitle: Hiiruki's Profile Picture + imageTitle: Lemniskett's Profile Picture buttons: - name: Blog url: blog - - name: Writeups - url: writeups # Social socialIcons: - - name: matrix - url: "https://matrix.to/#/@hiiruki:matrix.org" - - name: github - url: "https://github.com/hiiruki" - - name: mastodon - url: "https://infosec.exchange/@hiiruki" - - name: twitter - url: "https://x.com/0xHiiruki" - # - name: discord - # url: "https://discordapp.com/users/529270835341426708" - # - name: telegram - # url: "https://t.me/hiirvki" - # - name: xmpp => There is a bug in the URL, when using `xmpp:` as a protocol in the URL, it will be converted to http://xmpp:hiiruki@yourdataforsale which results in 404. - # url: "xmpp:hiiruki@yourdata.forsale" - name: email - url: "mailto:hi@hiiruki.dev" + url: "mailto:syahrial@lemniskett.dev" + - name: github + url: "https://github.com/lemniskett" + - name: pleroma + url: "https://lemniskett.space/users/lemniskett" + - name: telegram + url: "https://t.me/lemniskett" - name: pgp url: "/pgp.txt" - name: rss @@ -143,9 +132,6 @@ menu: - name: About url: /about/ weight: 4 - - name: Alt - url: https://hiiruki.com/ - weight: 5 # Taxonomies taxonomies: diff --git a/netlify.toml b/netlify.toml deleted file mode 100644 index 5bfb9ae..0000000 --- a/netlify.toml +++ /dev/null @@ -1,11 +0,0 @@ -[build] - publish = "public" - command = "hugo --gc --minify" - -[build.environment] - HUGO_VERSION = "0.118.2" - HUGO_ENV = "production" - HUGO_ENABLEGITINFO = "true" - -[context.deploy-preview] - command = "sed -i 's/! Content-Security-Policy//g' static/_headers && hugo --minify" diff --git a/static/.well-known/atproto-did b/static/.well-known/atproto-did deleted file mode 100644 index d9248b7..0000000 --- a/static/.well-known/atproto-did +++ /dev/null @@ -1 +0,0 @@ -did:plc:vagr5tvxln5hobl3aysklhtd \ No newline at end of file diff --git a/static/.well-known/hof.txt b/static/.well-known/hof.txt deleted file mode 100644 index d0f9ca8..0000000 --- a/static/.well-known/hof.txt +++ /dev/null @@ -1,22 +0,0 @@ -[!] Hall of Fame [!] - -Well, this is a static site but yeah no system is safe. -Just to implement the RFC 9116 (https://datatracker.ietf.org/doc/html/rfc9116) - ___________ - |.---------.| - || || - || || - || || - |'---------'| - `)__ ____(' - [=== -- o ]--. - __'---------'__ \ - [::::::::::: :::] ) - `""'"""""'""""`/T\ - \_/ - -Thanks to these researcher for finding bugs on my website: - -1. You? -2. -3. diff --git a/static/.well-known/security.txt b/static/.well-known/security.txt deleted file mode 100644 index b366d55..0000000 --- a/static/.well-known/security.txt +++ /dev/null @@ -1,35 +0,0 @@ -Security Contact Information - -Well, this is a static site but yeah no system is safe. -Just to implement the RFC 9116 (https://www.rfc-editor.org/rfc/rfc9116) - - .-""-. - / .--. 
\ - / / \ \ - | | | | - | |.-""-.| - ///`.::::.`\ - ||| ::/ \:: ; - ||; ::\__/:: ; - \\\ '::::' / - `=':-..-'` - -# Security Address -Contact: mailto:hi@hiiruki.dev -Contact: mailto:security@hiiruki.dev -Contact: mailto:hiiruki@pm.me - -# PGP/GPG Key -Encryption: https://hiiruki.dev/pgp.txt - -# Security Acknowledgments Page -Acknowledgments: https://hiiruki.dev/.well-known/hof.txt - -# Preferred Languages to Report a Vulnerability -Preferred-Languages: EN, ID - -# security.txt File Location -Canonical: https://hiiruki.dev/.well-known/security.txt - -security.txt - A proposed standard which allows websites to define security policies. -[https://securitytxt.org/] diff --git a/static/_redirects b/static/_redirects deleted file mode 100644 index d27c1c5..0000000 --- a/static/_redirects +++ /dev/null @@ -1,24 +0,0 @@ -# PGP & SSH Key -/pgp https://hiiruki.dev/pgp.txt -/ssh https://hiiruki.dev/ssh.txt - -# Security -/security /.well-known/security.txt -/hof /.well-known/hof.txt - -# Colophon -/humans /humans.txt - -# ¯\_(ツ)_/¯ -/admin http://aka.ms/confidential -/administrator http://aka.ms/confidential -/login http://aka.ms/confidential -/cpanel http://aka.ms/confidential -/secret http://aka.ms/confidential -/webadmin http://aka.ms/confidential -/adminarea http://aka.ms/confidential -/cp http://aka.ms/confidential -/controlpanel http://aka.ms/confidential - -# Contact -/contact /contact.txt diff --git a/static/android-chrome-192x192.png b/static/android-chrome-192x192.png index 1967080..692ea0a 100644 Binary files a/static/android-chrome-192x192.png and b/static/android-chrome-192x192.png differ diff --git a/static/android-chrome-512x512.png b/static/android-chrome-512x512.png index 6c73658..496ce96 100644 Binary files a/static/android-chrome-512x512.png and b/static/android-chrome-512x512.png differ diff --git a/static/apple-touch-icon.png b/static/apple-touch-icon.png index 03536ac..e0ee682 100644 Binary files a/static/apple-touch-icon.png and b/static/apple-touch-icon.png differ diff --git a/static/contact.txt b/static/contact.txt deleted file mode 100644 index 076ce2b..0000000 --- a/static/contact.txt +++ /dev/null @@ -1,41 +0,0 @@ -[matrix]: @hiiruki:matrix.org -[session]: ID: 055b210e9f97217abf1872ed98af29640d9f5194847352975a6e9a3ea301683602 -[xmpp]: hiiruki@0x1.re -[irc]: hiiruki @ Libera.Chat -[email]: hi@hiiruki.dev -[pgp]: https://hiiruki.dev/pgp.txt - ------BEGIN PGP MESSAGE----- - -hQIMA8LCHTNp15aBAQ/+LKbEyt/oSejUJ3ka+uFlirpH5AYDtOL9ReWz4be3Fukj -JkmBCNxrWngTZXkao2kRHMCZhHEX0n2FxavMUKmmfvY/IOgn2J7Jncs0ehabHXZ4 -yD2TJowzmHPAnx9Jv739wQ6RFj71lHUzz+L31VYfGekNnAz0/upsISZ8pfgbfXlD -c1KWxVSvq1y+R6oDPuh5k2EtZFXm7tgBDOTgOpSrL9kkWDJ2uIh9vuQr6u/HuG91 -QGWgkjfmpig6vM9bdTFJ8lAgPtZ2d9O4jUvqEVoFglljjRHdKMQErFW153XlzKmr -+T8FpsYN8nV3S1FI3o3zAgv3D+BbuGtR7Yp0pEa6rQncANNTMz8rZ7U5hX7LpOHw -SJgK1mDCEv1ND6AFDXhCPbx21Tr0YWKaYsqzQm/1YKHUXzJ4jL7yzVycbi+ox5lV -UiDkRg1IE8eOpA/wY7Uq4XfHy/nliMGXENMAHjfMpcP2jTA+Nrh4wcbh5lvqtD07 -rTKdy1kYCpv1dG6lDahrkCyrxPxxKZH04DfEPidvl5OekfL3U6StwR0TcFlWhcAt -fM9DN8JteF4g3HB1ZYdFEs9Ce7zNEQ0L2BhBr4mswiQZNd0GYGEghG5SV5PNTuUS -AOpgdUhXM9C5yT+2XBqLDuGvFpjBTCjvkOGGOhZe/EbStKcr+KqlkWmEzt1fOoTS -6QGHesFxOMG4rSurfsTBcIO/5g9zjJNgUlGNDMWjLi8NRJBAfTFPz40qbnHlwjgI -+5iRGC8b8Ex/imi7ri9Acl5SZPXwllxRMyIJwQZ5CY5XiYHOarHhkzqPpxpFAjVN -2DpDGkX61uIZ+XAkBFtT2oPzRTCas1dpJken4PMhBgZCJ6P9gA4mOZ49QiCjxNlN -rVUvF8xvm7LTjqcsGkgHxWawx5wCeJDiA5y80+9UqzEhg54J22wv03BXjdnNILl8 -ITjrSPkcAnym2gFtzIzAbKKQ36qfKSG4VcUinVR59LJZ4Iu9a/I1tlB/h3h9M8Oo -Po8A0002LVbNMxHTCjDdJLY74SHHIQQqLT2ECY+fb0nlLUhrCHAlXmQ8JmN1oKrf 
-t+Xrk633XcAZQW5xAIPZFAt/OodK8RP4hGV6n8EE4FBPzebylDyXSpU1p0qIncDj -e4w2AFmi2WOA6UzQ6wE3ttH4gYeuki7UrsNzbx883sjRs3PvITF9ckpRMtoSUysQ -ka3elz/N6g7aiAIxcoWXdC/7miqLxqNtFHvOEjWnfuc+dF/CI5TZHuTPv3Dg3/lO -iA9W1t011W362yVY6TW1bXbRAriYK6Ra7r8VfONWhmbGJ2DVIRcaugRgxn0Eq+3G -97Sga/I/up0Del81OfvA3Flt4NgSWTWNip9zOTner7ZXwLDU7morJtFFMzv9tcyq -jQsZrFGUym6y0zFoi+WptNnVRS6RMklcvut6ei8faIPCJmPZZS7vie7aTLecurzE -jNC/Zet1HJB3QoNudW9cj3KulItH1OWNK/z97xgoFIIQFe/FeEvlpnthFLB3HK7+ -i5p7/3IjNhyOB4ugQlFA7/UaNwis0arVbD4RUGchHaoEL/SRlcU40vKclndBE1am -KknBF4tXDMkBa/SdkFUumU4q0RDmjVz43qI4ALvsljVtAfZRQAA6wnn3bCjmlZN4 -5sp1dS2ngSXklRpN+Dn2+jLAnnI072EBYUtsLygc8x7nxjcx4ttJ04GKSxf9bKMG -pUNHnFM+G3SSZNptjywy2Xz4lqLPeZ52OM0H1NrHbk25z5T/nTqU55r3Cl1uzFuT -MsO3YofaahvYDvqmniWCIbevVIIR4n2JSSv51Zs846yD5DvltHPPAdEZEnEpfVnH -I7dJnM7lf/k98w5RgrbGd470eg== -=2XCx ------END PGP MESSAGE----- diff --git a/static/favicon-16x16.png b/static/favicon-16x16.png index 694b7be..af0e59b 100644 Binary files a/static/favicon-16x16.png and b/static/favicon-16x16.png differ diff --git a/static/favicon-32x32.png b/static/favicon-32x32.png index 0f4ed5d..a03b7c7 100644 Binary files a/static/favicon-32x32.png and b/static/favicon-32x32.png differ diff --git a/static/favicon.ico b/static/favicon.ico index b037617..4f6cae7 100644 Binary files a/static/favicon.ico and b/static/favicon.ico differ diff --git a/static/favicon.svg b/static/favicon.svg deleted file mode 100644 index c0906ae..0000000 --- a/static/favicon.svg +++ /dev/null @@ -1,8 +0,0 @@ -<?xml version="1.0" encoding="UTF-8"?> -<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"> -<svg xmlns="http://www.w3.org/2000/svg" version="1.1" width="512px" height="512px" style="shape-rendering:geometricPrecision; text-rendering:geometricPrecision; image-rendering:optimizeQuality; fill-rule:evenodd; clip-rule:evenodd" xmlns:xlink="http://www.w3.org/1999/xlink"> -<g><path style="opacity:0.994" fill="#000000" d="M 48.5,1.5 C 186.834,1.33333 325.167,1.5 463.5,2C 490.024,5.85779 505.191,21.0245 509,47.5C 509.667,186.167 509.667,324.833 509,463.5C 505.191,489.976 490.024,505.142 463.5,509C 325.167,509.667 186.833,509.667 48.5,509C 23.8453,505.586 8.67861,491.753 3,467.5C 2.33333,326.167 2.33333,184.833 3,43.5C 8.91778,19.3189 24.0844,5.3189 48.5,1.5 Z"/></g> -<g><path style="opacity:1" fill="#fafafa" d="M 121.5,160.5 C 167.435,180.425 213.435,200.258 259.5,220C 260.814,227.775 260.814,235.608 259.5,243.5C 213.628,263.565 167.628,283.232 121.5,302.5C 121.187,295.245 121.52,288.079 122.5,281C 160.833,264.667 199.167,248.333 237.5,232C 199.406,215.022 161.073,198.522 122.5,182.5C 121.503,175.197 121.169,167.863 121.5,160.5 Z"/></g> -<g><path style="opacity:1" fill="#7d7d7d" d="M 400.5,348.5 C 400.5,344.167 400.5,339.833 400.5,335.5C 357.5,335.5 314.5,335.5 271.5,335.5C 271.5,339.833 271.5,344.167 271.5,348.5C 270.514,344.03 270.181,339.363 270.5,334.5C 314.167,334.5 357.833,334.5 401.5,334.5C 401.819,339.363 401.486,344.03 400.5,348.5 Z"/></g> -<g><path style="opacity:1" fill="#fdfdfd" d="M 400.5,348.5 C 357.5,348.5 314.5,348.5 271.5,348.5C 271.5,344.167 271.5,339.833 271.5,335.5C 314.5,335.5 357.5,335.5 400.5,335.5C 400.5,339.833 400.5,344.167 400.5,348.5 Z"/></g> -</svg> diff --git a/static/hiirukipub.asc b/static/hiirukipub.asc deleted file mode 100644 index 0d9a58f..0000000 --- a/static/hiirukipub.asc +++ /dev/null @@ -1,51 +0,0 @@ ------BEGIN PGP PUBLIC KEY BLOCK----- - -mQINBGMYkdUBEADC0X1ndeHfLwakyL7ZNZSijiZvwK7Cj0kRGZiDoRIN1xWAr2DE 
-r47QuLXTa4K1MK6uM95nbD8EtB/2F9CcxfK2LUTGKSQ2fUQcCVO+iyO/njaSqozN -K7E0YJRSgfutecBzLnd479K7g84P80/GuLE1sUC6zNuh0SXFJZYIAr3vKRmWNj4L -KNcbO3lF7QeIevS5nkC3r3MxcmQ4FVrFzs6oz4XduAomEipmzvzPwL0dQfAvJyl1 -y98WC37CQYvWf92UmwuqVKRlNZfVwMoN8UCHnyLADxludv5P9Dg4gOkKNTwnXeJT -JwHcHHMO/U88Tee283OMsdWPE3JIEXlgjQYk0HxU1W5+LB7XC/ROnArgwv41Mxf/ -rZNNVgkbcQZMczi7M+OBQSnUldTzAQC5KDAbKW5SYo882DJWUBqUmF3LgIqAQkES -jNnYu6+p1HsgwaSn8OYcRByRjaFi4bqJRYFReBOyGHCSL9fEZiP8lteXcwXaixgr -mldhRTRnof1xk7WIA8h9SSuL8aoGZba8UIfjqvCKGGeW4zx8Hm4qOH8v1ZqTk+mL -jPxSTNsdovSku2SjzKdjAaYeTGmVI6fFhYplwfQ1z9Zik0Ul66AmAuqXAd53kJ7q -aHDnLdAONtYgu2Wf+j0AlvBbnBfcItmz0XZ97b0mXUL8sHz09k1L0K2yJQARAQAB -tBhIaWlydWtpIDxoaUBoaWlydWtpLmRldj6JAk4EEwEKADgWIQSupbkn1/DUC/Sz -yfHkDXUhr1iGyAUCYxiR1QIbAwULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAKCRDk -DXUhr1iGyMsHD/sH+12IcDrxkSr7KMarswnUfzeyafviCXuRkPqXQmt2Zm//7kyW -SGgymIehogjSVVwGwaa8wPArUprhvq90pbgJpKKhcqecOLMUKhwgCoOiVPLv9cg/ -Nw9ftTHkeVdf2oKe4pJ9hosCmnbXHbsTIWOhQdU42p41rh3n0UsacbHrEeqmZJ6x -X43DDS4c0cSp6qSfxkZjL7alFfRT6iwoch8EdMRkUdYxpAcoecVYYCNXT8E4es0I -i8KUNglAPhHYsKcCx/w2oHcYd1IgjnN0mv4ueVN1NYXWkjR+9RhiT541b1Ly3YXd -VVANm5f43bs+q+LrUaLl8g/4lBPy35Wg1KdukapkVbw/20NM0C068OH0nLSGQ/MA -TFbOvaEVE+3DmaAvCDNA30nIHubzMqYMRLeY60wq7L5MKgeadevLarplaM+QWup/ -Z4Ha+TzOhrx1YHoGb+ZvIXj15N3d+PfLkcavEwwhEqv6eACGzeZuB8PvTbUOPP7H -zxMk5NYn+/kW7z+Knz3aW3XIcNXObGZcWiJWon2n7vRZTA7VGtNVMPWfdLZeuu6c -ZucDTjkE9JJBvFRbGaMae5xw2+WiHyUkyJTUw4t1sxP3+M+JKW0f2r34lzDF/iBz -NljqxNXyZrOnnG050e7CC7CYaz36KEfW5+UD2DEPGB+3eVBC0SlBBK1IjLkCDQRj -GJHVARAAwlqzDSBEaHQ0jQhNQKX2JmuhXAXYb0RaSWSjOVid776CbKRhKkB4lByC -8yfcaWgQpuljtyEHNHqzw26CKrTnrXfGOeiYRzNmTHQgCwBXT7wLOZfgWkmmju+d -lcdcdf7FXX1cvrToUxHTWXYVdwCUQXSu098I12/plD8wqQjnXhaPyA6Fo4HjyGJ1 -VOyQmyeMs27yzoo+ZvTXJcqbi2jjHQKAPs/Jms7s6rFlO9X29+7nNwg7J6kduuK/ -NuQUBr5wDLolV+0YflSJMp9SjfZ9yF2v2gCyT8+BNGGkBAHHQ+q+JKFO7+tPwYmi -CoOt6XVtelfiqShQ9UA1DTB1EFvUVVFluvIbuPHuR7oPYaZ0FbYlPgl2pr57Xjb7 -60OHIjR2Vgei0m6Ou1ZUegTU879ZaR8prtBv2E5yehZOtNejlsCy6cbRYpxwt9yD -8c+OYYk1unKbBqMjRHVsAZ/X9HBAFBTSTfaoaVVj1WFBQDrKgodaRfEeIrpeUqYv -HGqAATKaWpQc9GmtzwNU+/mkQ5iVWML6uCpzLJaxccOZrkDV0BN3wwPVMVzM9HlH -V8l3XQ8xjxS29YlpL2BXaNU9ZswLVqEun2fnjWvljffNoK4CNgW2N9xqAZsCWB9D -IeDFaYd1/27ibmGX5zsGuW7hnZZM6J4HpgErpF3xufBkFwt90YcAEQEAAYkCNgQY -AQoAIBYhBK6luSfX8NQL9LPJ8eQNdSGvWIbIBQJjGJHVAhsMAAoJEOQNdSGvWIbI -KkwP/3diM9teaOvhakLlXC+jEiViiZLmyDkObJF06MNKzrXGaLuXwvwJmc0quivT -GMOyAMY0BjmnAvq0XAawt0UQZVgKwppIhmxu7v88j04vw85kyvjIUVtJQcILHCPf -oHvHiUG/zmL61mbYHSNoqL285IkHSEf6GVHvFnibvI0hIns+5sXtfvi5E8w88ixJ -ll9glTlxJsqI4YySyEe9bMF3wZ7OP68vKsCMjpuFf+hep92Y796aMGDYbaJYAjaF -OPue6Cy5v+W3LaEa7gGwFDH+cWMmIVzSYuftTOqfs2V08z9CzOVoPkr17WunCEO4 -etEM9bcgd45yijBSJ6zSWW8uzpTppY3DG3Spm4NYT9yWU5icPi0kJHheE97nx53c -YaXzajTQ6QITk+Rri/Qd2mno/ssdqZdUH0Jix3R8FWRUphi98aK5U3scM6wBgxpY -gtsudQxdmw4nB/tmnQPYymhlnLNtOAG1WVeMDQo7Egro3MZ0sMVuYLXLRVTd00g2 -qbgu861bU4GnlR6Q9Lq7eJ06EHD+/Lel0QGkXp+gPUJ19wtiQmcc6GJGz6HsHZyu -6LGno1nZECbHWrhLzgjoHuovStYcVStyUBoFoOP58RiqRsI3zs3XHZhGobYMw1id -N28TVrzr/JaFqxgnULHzxCLccELSjmfgshrPyHPrlpA2+e1H -=GBHe ------END PGP PUBLIC KEY BLOCK----- diff --git a/static/humans.txt b/static/humans.txt deleted file mode 100644 index 8ef6e54..0000000 --- a/static/humans.txt +++ /dev/null @@ -1,45 +0,0 @@ -/* Hey, you found this stuff (⊙_⊙') */ - -██╗ ██╗██╗██╗██████╗ ██╗ ██╗██╗ ██╗██╗ -██║ ██║██║██║██╔══██╗██║ ██║██║ ██╔╝██║ -███████║██║██║██████╔╝██║ ██║█████╔╝ ██║ -██╔══██║██║██║██╔══██╗██║ ██║██╔═██╗ ██║ -██║ ██║██║██║██║ ██║╚██████╔╝██║ ██╗██║ -╚═╝ ╚═╝╚═╝╚═╝╚═╝ ╚═╝ ╚═════╝ ╚═╝ ╚═╝╚═╝ - https://hiiruki.dev - -/* TEAM */ - -Hiiruki 
-- Site : https://hiiruki.dev/ | https://hiiruki.com/ -- Contact : https://hiiruki.dev/about -- GitHub : https://github.com/hiiruki - -/* SITE */ - -Languages: -- English (EN) -- Indonesia (ID) - -Technologies: -- Standards : HTML, CSS, JavaScript, Markdown -- Frameworks : Hugo -- IDE : Visual Studio Code, Vim, Nano -- Hosting : Netlify, Vercel, DigitalOcean, GitHub -- OS : Arch Linux, Windows 11, Kali Linux, Ubuntu - -/* THANKS */ - -- Hugo [https://gohugo.io/] -- Netlify [https://www.netlify.com/] -- Vercel [https://vercel.com/] -- DigitalOcean [https://www.digitalocean.com/] -- GitHub [https://github.com/] -- MDN Web Docs by Mozilla [https://developer.mozilla.org/en-US/] -- WebDev by Google [https://web.dev/] -- w3schools [https://www.w3schools.com/] -- JSFiddle [https://jsfiddle.net/] -- Email Obfuscator [https://www.albionresearch.com/tools/obfuscator] - -The humans responsible & technology colophon -[http://humanstxt.org] diff --git a/static/images/profile.webp b/static/images/profile.webp index 6cad3bd..73feb27 100644 Binary files a/static/images/profile.webp and b/static/images/profile.webp differ diff --git a/static/pgp.txt b/static/pgp.txt index 8799b1f..44efb44 100644 --- a/static/pgp.txt +++ b/static/pgp.txt @@ -1,51 +1,14 @@ -----BEGIN PGP PUBLIC KEY BLOCK----- -mQINBGMYkdUBEADC0X1ndeHfLwakyL7ZNZSijiZvwK7Cj0kRGZiDoRIN1xWAr2DE -r47QuLXTa4K1MK6uM95nbD8EtB/2F9CcxfK2LUTGKSQ2fUQcCVO+iyO/njaSqozN -K7E0YJRSgfutecBzLnd479K7g84P80/GuLE1sUC6zNuh0SXFJZYIAr3vKRmWNj4L -KNcbO3lF7QeIevS5nkC3r3MxcmQ4FVrFzs6oz4XduAomEipmzvzPwL0dQfAvJyl1 -y98WC37CQYvWf92UmwuqVKRlNZfVwMoN8UCHnyLADxludv5P9Dg4gOkKNTwnXeJT -JwHcHHMO/U88Tee283OMsdWPE3JIEXlgjQYk0HxU1W5+LB7XC/ROnArgwv41Mxf/ -rZNNVgkbcQZMczi7M+OBQSnUldTzAQC5KDAbKW5SYo882DJWUBqUmF3LgIqAQkES -jNnYu6+p1HsgwaSn8OYcRByRjaFi4bqJRYFReBOyGHCSL9fEZiP8lteXcwXaixgr -mldhRTRnof1xk7WIA8h9SSuL8aoGZba8UIfjqvCKGGeW4zx8Hm4qOH8v1ZqTk+mL -jPxSTNsdovSku2SjzKdjAaYeTGmVI6fFhYplwfQ1z9Zik0Ul66AmAuqXAd53kJ7q -aHDnLdAONtYgu2Wf+j0AlvBbnBfcItmz0XZ97b0mXUL8sHz09k1L0K2yJQARAQAB -tBhIaWlydWtpIDxoaUBoaWlydWtpLmRldj6JAk4EEwEKADgWIQSupbkn1/DUC/Sz -yfHkDXUhr1iGyAUCYxiR1QIbAwULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAKCRDk -DXUhr1iGyMsHD/sH+12IcDrxkSr7KMarswnUfzeyafviCXuRkPqXQmt2Zm//7kyW -SGgymIehogjSVVwGwaa8wPArUprhvq90pbgJpKKhcqecOLMUKhwgCoOiVPLv9cg/ -Nw9ftTHkeVdf2oKe4pJ9hosCmnbXHbsTIWOhQdU42p41rh3n0UsacbHrEeqmZJ6x -X43DDS4c0cSp6qSfxkZjL7alFfRT6iwoch8EdMRkUdYxpAcoecVYYCNXT8E4es0I -i8KUNglAPhHYsKcCx/w2oHcYd1IgjnN0mv4ueVN1NYXWkjR+9RhiT541b1Ly3YXd -VVANm5f43bs+q+LrUaLl8g/4lBPy35Wg1KdukapkVbw/20NM0C068OH0nLSGQ/MA -TFbOvaEVE+3DmaAvCDNA30nIHubzMqYMRLeY60wq7L5MKgeadevLarplaM+QWup/ -Z4Ha+TzOhrx1YHoGb+ZvIXj15N3d+PfLkcavEwwhEqv6eACGzeZuB8PvTbUOPP7H -zxMk5NYn+/kW7z+Knz3aW3XIcNXObGZcWiJWon2n7vRZTA7VGtNVMPWfdLZeuu6c -ZucDTjkE9JJBvFRbGaMae5xw2+WiHyUkyJTUw4t1sxP3+M+JKW0f2r34lzDF/iBz -NljqxNXyZrOnnG050e7CC7CYaz36KEfW5+UD2DEPGB+3eVBC0SlBBK1IjLkCDQRj -GJHVARAAwlqzDSBEaHQ0jQhNQKX2JmuhXAXYb0RaSWSjOVid776CbKRhKkB4lByC -8yfcaWgQpuljtyEHNHqzw26CKrTnrXfGOeiYRzNmTHQgCwBXT7wLOZfgWkmmju+d -lcdcdf7FXX1cvrToUxHTWXYVdwCUQXSu098I12/plD8wqQjnXhaPyA6Fo4HjyGJ1 -VOyQmyeMs27yzoo+ZvTXJcqbi2jjHQKAPs/Jms7s6rFlO9X29+7nNwg7J6kduuK/ -NuQUBr5wDLolV+0YflSJMp9SjfZ9yF2v2gCyT8+BNGGkBAHHQ+q+JKFO7+tPwYmi -CoOt6XVtelfiqShQ9UA1DTB1EFvUVVFluvIbuPHuR7oPYaZ0FbYlPgl2pr57Xjb7 -60OHIjR2Vgei0m6Ou1ZUegTU879ZaR8prtBv2E5yehZOtNejlsCy6cbRYpxwt9yD -8c+OYYk1unKbBqMjRHVsAZ/X9HBAFBTSTfaoaVVj1WFBQDrKgodaRfEeIrpeUqYv -HGqAATKaWpQc9GmtzwNU+/mkQ5iVWML6uCpzLJaxccOZrkDV0BN3wwPVMVzM9HlH -V8l3XQ8xjxS29YlpL2BXaNU9ZswLVqEun2fnjWvljffNoK4CNgW2N9xqAZsCWB9D 
-IeDFaYd1/27ibmGX5zsGuW7hnZZM6J4HpgErpF3xufBkFwt90YcAEQEAAYkCNgQY -AQoAIBYhBK6luSfX8NQL9LPJ8eQNdSGvWIbIBQJjGJHVAhsMAAoJEOQNdSGvWIbI -KkwP/3diM9teaOvhakLlXC+jEiViiZLmyDkObJF06MNKzrXGaLuXwvwJmc0quivT -GMOyAMY0BjmnAvq0XAawt0UQZVgKwppIhmxu7v88j04vw85kyvjIUVtJQcILHCPf -oHvHiUG/zmL61mbYHSNoqL285IkHSEf6GVHvFnibvI0hIns+5sXtfvi5E8w88ixJ -ll9glTlxJsqI4YySyEe9bMF3wZ7OP68vKsCMjpuFf+hep92Y796aMGDYbaJYAjaF -OPue6Cy5v+W3LaEa7gGwFDH+cWMmIVzSYuftTOqfs2V08z9CzOVoPkr17WunCEO4 -etEM9bcgd45yijBSJ6zSWW8uzpTppY3DG3Spm4NYT9yWU5icPi0kJHheE97nx53c -YaXzajTQ6QITk+Rri/Qd2mno/ssdqZdUH0Jix3R8FWRUphi98aK5U3scM6wBgxpY -gtsudQxdmw4nB/tmnQPYymhlnLNtOAG1WVeMDQo7Egro3MZ0sMVuYLXLRVTd00g2 -qbgu861bU4GnlR6Q9Lq7eJ06EHD+/Lel0QGkXp+gPUJ19wtiQmcc6GJGz6HsHZyu -6LGno1nZECbHWrhLzgjoHuovStYcVStyUBoFoOP58RiqRsI3zs3XHZhGobYMw1id -N28TVrzr/JaFqxgnULHzxCLccELSjmfgshrPyHPrlpA2+e1H -=GBHe ------END PGP PUBLIC KEY BLOCK----- +mDMEZQUA4hYJKwYBBAHaRw8BAQdA5jNsJ/mNAHJvRy7pQaMHTonqcv1UizcsWpVG +GXJdHMC0MFN5YWhyaWFsIEFnbmkgUHJhc2V0eWEgPHN5YWhyaWFsQGxlbW5pc2tl +dHQuZGV2PoiWBBMWCAA+FiEEgVCMXVBAFlBu1g1dQyX5nPAauEYFAmUFAOICGyMF +CSWYBgAFCwkIBwIGFQoJCAsCBBYCAwECHgECF4AACgkQQyX5nPAauEZ42QEAiF8Q +lgjhCxlwJpaOwh0SqXVnf6qdPIYDqH4U8koMmzwA/RJQ2vUKlvvTmQ/UxbOaMTg9 +u0vdkWTr6y4UY6YrnX4BuDgEZQUCMRIKKwYBBAGXVQEFAQEHQHJvWB8zFktqnJiv +chNPEotoslfyZSm/E+W4NZmGyDwyAwEIB4h+BBgWCAAmFiEEgVCMXVBAFlBu1g1d +QyX5nPAauEYFAmUFAjECGwwFCSWYBgAACgkQQyX5nPAauEbCdwEAk6a/tzTYMZZA +xgwSmQTSC27lq5C+mJ0VrfKiuG7dgQ4A/A/BmovgTdeLTwk6GqbJIZbVmgehzgZx +2nrs5uSs7EUA +=xj2a +-----END PGP PUBLIC KEY BLOCK----- \ No newline at end of file diff --git a/static/session.txt b/static/session.txt deleted file mode 100644 index 1b2f1cf..0000000 --- a/static/session.txt +++ /dev/null @@ -1 +0,0 @@ -Session ID: 055b210e9f97217abf1872ed98af29640d9f5194847352975a6e9a3ea301683602 \ No newline at end of file diff --git a/static/site.webmanifest b/static/site.webmanifest index 124bd99..45dc8a2 100644 --- a/static/site.webmanifest +++ b/static/site.webmanifest @@ -1 +1 @@ -{"name":"Hiiruki's lab","short_name":"hiiruki.dev","icons":[{"src":"/android-chrome-192x192.png","sizes":"192x192","type":"image/png"},{"src":"/android-chrome-512x512.png","sizes":"512x512","type":"image/png"}],"theme_color":"#ffffff","background_color":"#ffffff","display":"standalone"} \ No newline at end of file +{"name":"","short_name":"","icons":[{"src":"/android-chrome-192x192.png","sizes":"192x192","type":"image/png"},{"src":"/android-chrome-512x512.png","sizes":"512x512","type":"image/png"}],"theme_color":"#ffffff","background_color":"#ffffff","display":"standalone"} \ No newline at end of file diff --git a/static/ssh.txt b/static/ssh.txt deleted file mode 100644 index a58e472..0000000 --- a/static/ssh.txt +++ /dev/null @@ -1 +0,0 @@ -SHA256:uxJNkKzML7tBYwYdjzviimi/Nw4Nd8ghFpl2MOrYLnw hiiruki