Terraform Basics – AWS / GCP / Aliyun

What is Terraform?

It’s a tool to create and manage infrastructure as code. Infrastructure includes not only servers but also network resources, e.g. DNS and load balancers. The benefits you get are as follows:

  • Versioning of your changes
  • Management of all services as a whole (orchestration)
  • Single management of multi-cloud platform
  • and so on …

Let’s Try

To demonstrate how to use Terraform, I create two compute instances, make some modifications, and finally delete all the resources.

  • on AWS (Amazon Web Services), GCP (Google Cloud Platform) and Aliyun (Alibaba Cloud)
  1. Install Terraform
  2. Get credentials
  3. Create servers
  4. Modify servers
  5. Delete all procured resources
Continue reading “Terraform Basics – AWS / GCP / Aliyun”

Fortigate Config Change Notification

Whenever a configuration change is made, the Fortigate posts a notification to a Slack channel.

Fortigate automation is composed of three elements:

  1. automation trigger … available triggers: HA Failover, Config change, Log, IOC, High CPU, Conserve mode
  2. automation action … available actions: Email, IP Ban, AWS Lambda, Webhook (a sketch of such a webhook call follows this list)
  3. automation stitch … a combination of a trigger and an action
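
For reference, a webhook action boils down to an HTTP POST with a JSON body. The following is a minimal Python sketch of the equivalent request against a Slack incoming webhook; the webhook URL and message text below are placeholders, not the actual Fortigate configuration.

import requests

# Hypothetical Slack incoming webhook URL - replace with your own
SLACK_WEBHOOK_URL = 'https://hooks.slack.com/services/T0000/B0000/XXXXXXXX'

def notify_slack(message):
    # Slack incoming webhooks accept a JSON body with a 'text' field
    r = requests.post(SLACK_WEBHOOK_URL, json={'text': message})
    return r.status_code  # 200 on success

notify_slack('Fortigate: configuration change detected')
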
Continue reading “Fortigate Config Change Notification”

My reason to study kubernetes

Recently I’ve been shifting my workload to GCP, and in the course of that I’m studying GCP. When I used GCP a few years back for application deployment, I only touched App Engine and Datastore. At that time, most of the services didn’t support Python 3.x, which led me to use AWS mainly. But now Python 3.x is supported on GAE as well as on Cloud Functions, and I thought it was a good time to try out GCP (and eventually I decided to shift).

GCP is quite complete, and in my opinion it is neater than AWS. I found Kubernetes very interesting to use: it is simply very easy while being really effective. Though it is tightly integrated into GCP, I still need to understand how Kubernetes itself works. So, as usual, I started my own test deployment of Kubernetes from scratch, and it was actually quite difficult to understand.

In this category, I gathered all the resources I used to understand the Kubernetes concepts. While it’s not the main purpose, I used the Certified Kubernetes Administrator (CKA) curriculum as a guideline to measure my progress.

Python 100 project #22: Automated Excel translate with Multiple Translation Choice

This is an extended version of the previous project. This time, I created a Microsoft Azure Translator Text version of the translation module, and the user can now select a translation from either Google or MS.

 

Output Example:

$ python3 excelTranslate.py data_source/questionsTest.xlsx 
opening workbook
reading rows...
translating ... アプリケーションのスタート方法がわからない
first candidate ... I do not know how to start the application
>>>Please select one from ['Y', 'N']: Y
translating ... 赤丸の挿入方法を教えて欲しい
first candidate ... Please tell me how to insert red circle
>>>Please select one from ['Y', 'N']: N
second candidate ... How do i insert a red circle?
>>>Please select one from ['Y', 'N']: Y
translating ... 文章の削除の方法はどうしたらいいか
first candidate ... How can I delete sentences?
>>>Please select one from ['Y', 'N']: N
second candidate ... How do I delete sentences?
>>>Please select one from ['Y', 'N']: N
Please select the option:
0 - keep the original sentence
1 - use the first candidate
2 - use the second candidate
>>>Please select one from [0, 1, 2]: 0
done...

 

Here is the code. First, the MS Azure translation module (ms_azure.py):

def get_translate(sentence, lang='en'):
    import http.client, json

    from data_source.ms_credentials import get_credential

    # Translator Text API v3.0 endpoint
    host = 'api.cognitive.microsofttranslator.com'
    path = '/translate?api-version=3.0'
    params = "&to=" + lang
    headers = get_credential()

    requestBody = [{
        'Text': sentence,
    }]
    content = json.dumps(requestBody, ensure_ascii=False).encode('utf-8')

    conn = http.client.HTTPSConnection(host)
    conn.request("POST", path + params, content, headers)
    response = conn.getresponse()

    # The response is a JSON list with one entry per input text
    response_text = json.loads(response.read())[0]['translations'][0]['text']

    return response_text

And here is the main script, excelTranslate.py:

import sys

import openpyxl

import google_clouds
import ms_azure

TARGET_COLUMN = 'C'

if len(sys.argv) != 2:
    print(f"Usage: {sys.argv[0]} 'original excel file'")
    sys.exit(0)

print('opening workbook')

workbook = sys.argv[1]

wb = openpyxl.load_workbook(workbook)
sheets = wb.sheetnames
target = wb[sheets[0]]
# target = wb.copy_worksheet(original)

def ask_selection(selection):
    # Keep asking until the user enters one of the allowed choices
    while True:
        user_input = input(f'>>>Please select one from {selection}: ')
        try:
            # Numeric menus (e.g. [0, 1, 2]) need the input converted to int
            user_input = int(user_input)
        except ValueError:
            pass
        if user_input in selection:
            return user_input

print('reading rows...')
# Walk down the target column, starting from row 2 (row 1 is assumed to be a header)
for row in range(2, len(target[TARGET_COLUMN]) + 1):
    translations = []
    original_text = target[TARGET_COLUMN + str(row)].value
    translations.append(original_text)
    if original_text is not None and len(original_text) > 0:
        print(f'translating ... {original_text}')
        google_translation = google_clouds.get_translate(original_text)
        translations.append(google_translation)
        print(f"first candidate ... {google_translation}")

        if ask_selection(['Y', 'N']) == 'Y':
            selected_translation = 1
        else:
            ms_translation = ms_azure.get_translate(original_text)
            translations.append(ms_translation)
            print(f"second candidate ... {ms_translation}")

            if ask_selection(['Y', 'N']) == 'Y':
                selected_translation = 2
            else:
                print('Please select the option:\n'
                      '0 - keep the original sentence\n'
                      '1 - use the first candidate\n'
                      '2 - use the second candidate')
                selected_translation = ask_selection(list(range(len(translations))))

        target[TARGET_COLUMN + str(row)].value = translations[selected_translation]

wb.save('Translated_' + workbook.split('/')[-1])

print('done...')

 

Python 100 project #20: Google Cloud Translate

I’m working at a Japanese company in the UK, so naturally I use Japanese and English at work. It is often the case that I just need to translate Japanese to English, or the opposite. And most of the time it doesn’t require any technical background (it’s just plain translation). So I thought it would be great to have a function that automates those errands.

 

Output Example:

>>> import google_clouds
>>> s = '私は34歳の日本人です。仕事柄、日本語と英語を使いますが、ただの翻訳に時間を取られるのが嫌なので、自動化したいです。'
>>> 
>>> print(google_clouds.get_translate(s))
I am 34 years old Japanese. I use work patterns, Japanese and English, but I do not want to take time for just translation, so I would like to automate it.
>>>

There is just a minor mis-translation, but it is acceptable.

 

Here is the code:

from data_source.google_credentials import get_credential


def get_translate(sentence, lang='en'):
    # Imports the Google Cloud client library
    from google.cloud import translate

    # Instantiates a client
    translate_client = translate.Client(credentials=get_credential())

    # Translate the text into the target language (default: English)
    translation = translate_client.translate(
        sentence,
        target_language=lang)

    return translation['translatedText']

 

Python 100 project #16: Generate sentiment plot

I used a few libraries in this project to create a plot chart from the transcripts I retrieved in the previous project.

  • google-cloud-language … to retrieve the sentiment score from Google Cloud Natural Language
  • pandas … to create a dataframe (might not be necessary, but I will use this heavily later)
  • seaborn … to create the plot chart

 

Here is the output result. You can see that the Doctor becomes more and more excited towards the end.

 

This code uses the google-cloud-language library and returns the sentiment of the sentence it receives.

def get_sentiment(content):
    from google.cloud import language

    from data_source.google_credentials import get_credential

    client = language.LanguageServiceClient(credentials=get_credential())

    document = language.types.Document(
        content=content,
        type='PLAIN_TEXT',
    )

    response = client.analyze_sentiment(
        document=document,
        encoding_type='UTF32',
    )

    # magnitude: overall strength of emotion, score: positive/negative polarity
    sentiment = response.document_sentiment

    return sentiment.magnitude, sentiment.score

 

Using the sentence list I retrieved in the previous project, assume that “speeches” is the list of lines and “names” holds the name of whoever spoke the sentence at the corresponding index.

This retrieves the sentiment for each speech:

magnitudes = []
scores = []

for speech in speeches:

    magnitude, score = get_sentiment(speech)

    magnitudes.append(magnitude)
    scores.append(score)

 

This is data wrangling to make the format easier to process. It doesn’t actually transform the data much; maybe in a later project I will dive into more statistics using these transcripts, or the Japanese statistics I used in project #5.

import pandas as pd

conversation_df = pd.DataFrame(
    {'name': names,
     'magnitude': magnitudes,
     'score': scores,
    })

conversation_df['index_label'] = conversation_df.index
conversation_df['magnitude_score'] = conversation_df.magnitude * conversation_df.score

 

This is the actual code to display the plot using the data retrieved.

import seaborn as sns

sns.lmplot(x="index_label", y="magnitude_score", data=conversation_df, hue="name", fit_reg=False, size=10, aspect=1.5)

 

Python 100 project #14: Google Cloud Natural Language API

This is more like an introduction to the Google Cloud Natural Language API.

I’m trying to scrape drama transcripts from the web and want to visualize them. In past projects, I’ve used wordcloud quite often, but it merely counts how frequently a word appears in the text; frequency is of course a big factor in judging the importance of a word. I’m going to use the Cloud Natural Language API to compare the two results, and hopefully I can find new things about my favourite dramas.

Usually I use a third-party library if one exists, and Google Cloud Natural Language also has an official Python library (google-cloud-language). But this time I use plain requests to see what the raw transaction looks like.

 

[ analyzeEntities ]

import requests

MyAPIKEY = "your-api-key"

# REST endpoint for the v1 analyzeEntities method, authenticated with an API key
url = "https://language.googleapis.com/v1/documents:analyzeEntities?key={}"

says = "They're made of plastic. Living plastic creatures. They're being controlled by a relay device in the roof, which would be a great big problem if I didn't have this. So, I'm going to go up there and blow them up, and I might well die in the process, but don't worry about me. No, you go home. Go on. Go and have your lovely beans on toast. Don't tell anyone about this, because if you do, you'll get them killed."

params = {
    "document": {
        "type": "PLAIN_TEXT",
        "content": says,
    },
    "encodingType": "UTF8"
}

r = requests.post(url.format(MyAPIKEY), json=params)

r.json()
{'entities': [{'name': 'relay device',
   'type': 'OTHER',
   'metadata': {},
   'salience': 0.5245988,
   'mentions': [{'text': {'content': 'relay device', 'beginOffset': 81},
     'type': 'COMMON'},
    {'text': {'content': 'problem', 'beginOffset': 134}, 'type': 'COMMON'}]},
  {'name': 'plastic',
   'type': 'OTHER',
   'metadata': {},
   'salience': 0.2275309,
   'mentions': [{'text': {'content': 'plastic', 'beginOffset': 16},
     'type': 'COMMON'}]},
  {'name': 'creatures',
   'type': 'OTHER',
   'metadata': {},
   'salience': 0.11286028,
   'mentions': [{'text': {'content': 'creatures', 'beginOffset': 40},
     'type': 'COMMON'}]},
  {'name': 'roof',
   'type': 'OTHER',
   'metadata': {},
   'salience': 0.04330202,
   'mentions': [{'text': {'content': 'roof', 'beginOffset': 101},
     'type': 'COMMON'}]},
  {'name': 'process',
   'type': 'OTHER',
   'metadata': {},
   'salience': 0.027145086,
   'mentions': [{'text': {'content': 'process', 'beginOffset': 240},
     'type': 'COMMON'}]},
  {'name': 'toast',
   'type': 'OTHER',
   'metadata': {},
   'salience': 0.020346878,
   'mentions': [{'text': {'content': 'toast', 'beginOffset': 332},
     'type': 'COMMON'}]},
  {'name': 'beans',
   'type': 'OTHER',
   'metadata': {},
   'salience': 0.0200992,
   'mentions': [{'text': {'content': 'beans', 'beginOffset': 323},
     'type': 'COMMON'}]},
  {'name': 'anyone',
   'type': 'PERSON',
   'metadata': {},
   'salience': 0.015136869,
   'mentions': [{'text': {'content': 'anyone', 'beginOffset': 350},
     'type': 'COMMON'}]},
  {'name': 'home',
   'type': 'LOCATION',
   'metadata': {},
   'salience': 0.008979974,
   'mentions': [{'text': {'content': 'home', 'beginOffset': 286},
     'type': 'COMMON'}]}],
 'language': 'en'}

 

[ analyzeSentiment ]

url2 = 'https://language.googleapis.com/v1/documents:analyzeSentiment?key={}'

r2 = requests.post(url2.format(MyAPIKEY), json=params)

r2.json()
{'documentSentiment': {'magnitude': 3.1, 'score': 0},
 'language': 'en',
 'sentences': [{'text': {'content': "They're made of plastic.",
    'beginOffset': 0},
   'sentiment': {'magnitude': 0.1, 'score': -0.1}},
  {'text': {'content': 'Living plastic creatures.', 'beginOffset': 25},
   'sentiment': {'magnitude': 0.3, 'score': 0.3}},
  {'text': {'content': "They're being controlled by a relay device in the roof, which would be a great big problem if I didn't have this.",
    'beginOffset': 51},
   'sentiment': {'magnitude': 0.3, 'score': -0.3}},
  {'text': {'content': "So, I'm going to go up there and blow them up, and I might well die in the process, but don't worry about me.",
    'beginOffset': 165},
   'sentiment': {'magnitude': 0.5, 'score': 0.5}},
  {'text': {'content': 'No, you go home.', 'beginOffset': 275},
   'sentiment': {'magnitude': 0.1, 'score': -0.1}},
  {'text': {'content': 'Go on.', 'beginOffset': 292},
   'sentiment': {'magnitude': 0.1, 'score': 0.1}},
  {'text': {'content': 'Go and have your lovely beans on toast.',
    'beginOffset': 299},
   'sentiment': {'magnitude': 0.9, 'score': 0.9}},
  {'text': {'content': "Don't tell anyone about this, because if you do, you'll get them killed.",
    'beginOffset': 339},
   'sentiment': {'magnitude': 0.6, 'score': -0.6}}]}

 

It is interesting that the phrase ‘relay device’ has a salience value of 0.5245988, even though it appears no more often than the other words. It should be very interesting to gather all these results from whole Dr. Who episodes.