Ollama is a lightweight and user-friendly way to run LLMs locally. There is no complex setup involved, which makes it easy to explore AI chat models from the comfort of your own device.
This tutorial is a small part of a broader project I’m working on, which involves using local LLMs and vision models to analyze data directly on-device. This approach helps reduce costs and addresses some of the privacy concerns raised by our customers.
Installation and Setup
Download and install Ollama from the official website (ollama.com).
Once the setup is complete, simply launch the Ollama application—it will open a ChatGPT-like interface that lets you interact with local LLMs.

This UI makes it very easy to search for, download, and chat with different LLMs. You can also chat with cloud-hosted models without downloading them; note that you need an Ollama account in order to use a cloud model.
But we need to build a web-based chat application, and that means we have to interact with the Ollama API, which runs at http://localhost:11434
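Before wiring anything up, it’s worth a quick sanity check that the API responds. Here is a minimal sketch using Python’s requests package (it assumes you have already pulled the llama3.2:latest model):

import requests

# Ask the local Ollama server for a completion.
# Assumes the llama3.2:latest model has already been pulled.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2:latest",
        "prompt": "Say hello in one sentence.",
        "stream": False,  # return a single JSON object instead of a stream
    },
)
response.raise_for_status()
print(response.json()["response"])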

Everything seems to be set up properly. Let’s create a Python FastAPI endpoint which allows us to communicate with the Ollama API. You can also use Node.js, Go, or a .NET Web API to create the service endpoint.
Create a Python virtual environment and install the below dependencies.
pip install fastapi uvicorn requests httpx
The API uses a POST request and accepts three parameters: prompt, model, and stream.

prompt – The input message or query from the user.
model – Specifies which model to run the prompt against. If not provided, it defaults to llama3.2:latest.
stream – Optional setting that defaults to false. Set it to true if you want the response to appear with a typing animation, similar to ChatGPT.

Note: enabling streaming requires additional changes to the code below. The version below only needs the requests package; httpx comes into play in the streaming version later.
from fastapi import FastAPI
from pydantic import BaseModel
import requests

app = FastAPI()

class PromptRequest(BaseModel):
    prompt: str
    model: str = "llama3.2:latest"  # Default model, can be overridden in the request

@app.post("/generate")
async def generate_text(request: PromptRequest):
    ollama_api_url = "http://localhost:11434/api/generate"
    payload = {
        "model": request.model,
        "prompt": request.prompt,
        "stream": False  # True for streaming responses
    }
    try:
        response = requests.post(ollama_api_url, json=payload)
        response.raise_for_status()  # Raise an exception for bad status codes
        # Extract the generated text from Ollama's response
        generated_text = response.json()["response"]
        return {"response": generated_text}
    except requests.exceptions.RequestException as e:
        return {"error": f"Error communicating with Ollama: {e}"}
Run this API using uvicorn.
uvicorn main:app
The API server will start on the default port 8000. If you wish to change the port, start the API using the command below.
uvicorn main:app --port 8080
Let’s check the API response using Postman.
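Alternatively, if you prefer the command line over Postman, a short requests sketch does the same check (assuming the API is on the default port 8000):

import requests

# Call the FastAPI endpoint defined above (assumes the default port 8000).
resp = requests.post(
    "http://localhost:8000/generate",
    json={"prompt": "Why is the sky blue?", "model": "llama3.2:latest"},
)
print(resp.json())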

It’s quite helpful to see the response streamed in real time, the way ChatGPT displays it. So let’s change the stream parameter to true and update our API code.
from fastapi import FastAPI
from fastapi.responses import StreamingResponse, HTMLResponse
from fastapi.staticfiles import StaticFiles
from pydantic import BaseModel
import httpx
import json
import os

app = FastAPI()

class PromptRequest(BaseModel):
    prompt: str
    model: str = "llama3.2:latest"

@app.post("/generate")
async def generate_text(request: PromptRequest):
    ollama_api_url = "http://localhost:11434/api/generate"
    payload = {
        "model": request.model,
        "prompt": request.prompt,
        "stream": True
    }

    async def stream_text():
        async with httpx.AsyncClient(timeout=None) as client:
            async with client.stream("POST", ollama_api_url, json=payload) as response:
                # Ollama sends one JSON object per line; forward each text chunk as it arrives
                async for line in response.aiter_lines():
                    if line.strip():
                        try:
                            data = json.loads(line)
                            chunk = data.get("response", "")
                            if chunk:
                                yield chunk
                        except json.JSONDecodeError:
                            continue

    return StreamingResponse(stream_text(), media_type="text/plain")
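Before building the UI, you can watch the stream from a terminal with a small sketch like this (again assuming port 8000); it prints each chunk as it arrives:

import requests

# Consume the streaming endpoint chunk by chunk.
with requests.post(
    "http://localhost:8000/generate",
    json={"prompt": "Tell me a short story."},
    stream=True,
) as resp:
    resp.raise_for_status()
    for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
        print(chunk, end="", flush=True)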
Now that we have a streaming response, let’s build the UI. I am using Svelte. Start by creating a new project.
npm create vite@latest ollama-chat -- --template svelte-ts
Update the vite.config.ts file to include a custom proxy setting for the development server. This setup ensures that any requests made to /generate are forwarded to http://localhost:8000, allowing the frontend to communicate seamlessly with a backend API like FastAPI. It also helps prevent CORS-related issues during development.
import { defineConfig } from 'vite'
import { svelte } from '@sveltejs/vite-plugin-svelte'

export default defineConfig({
  plugins: [svelte()],
  server: {
    proxy: {
      '/generate': 'http://localhost:8000'
    }
  }
})
The response is formatted in Markdown, so to render it correctly, you’ll need an additional npm package called marked. You can install it using the command below.
npm install marked
Remember to change the port if you have set up a custom port for your API via uvicorn.
Replace the code in App.svelte with the code below. This is the component’s markup; prompt, loading, chatHtml, and askOllama are defined in the script block sketched right after it.
<h1>Ollama Chat</h1>
<textarea bind:value={prompt} placeholder="Ask anything..."></textarea>
<button on:click={askOllama}>Ollama Chat</button>
{#if loading}
  <p>Loading...</p>
{:else if chatHtml}
  {@html chatHtml}
{:else}
  <p>No response yet.</p>
{/if}
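For completeness, here is a minimal script block that could back this template. It posts the prompt to /generate, reads the streamed chunks with fetch, and renders the accumulated Markdown with marked. The variable and function names (prompt, chat, chatHtml, loading, askOllama) are illustrative:

<script lang="ts">
  import { marked } from 'marked';

  let prompt = '';
  let chat = '';      // raw Markdown accumulated from the stream
  let chatHtml = '';  // rendered HTML shown in the template
  let loading = false;

  async function askOllama() {
    loading = true;
    chat = '';
    chatHtml = '';
    // The vite proxy forwards /generate to the FastAPI server on port 8000.
    const res = await fetch('/generate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt })
    });
    loading = false;
    const reader = res.body!.getReader();
    const decoder = new TextDecoder();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      chat += decoder.decode(value, { stream: true });
      chatHtml = await marked.parse(chat); // re-render as chunks arrive
    }
  }
</script>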
Start the UI using this command.
npm run dev
We are now all set to run our local LLM-based chat agent. Let’s start by asking a question.

This code serves as a starting point. You can extend it by adding image or file upload functionality, allowing users to summarize content or ask questions based on the data within the uploaded document or image.
Here is the GitHub repo where you can find the entire code.

Today, my Amazon feed was flooded with mouse jiggler suggestions in various shapes, sizes, and features. A few days ago, during a chat with a friend, he mentioned wanting a device to keep his status active on Microsoft Teams while doing household chores. It was my first time hearing about such a gadget, and I found it fascinating to explore what it can do.
In a nutshell, a mouse jiggler is a device that moves your mouse or simulates its movement to keep your computer active.
The cheapest mouse jiggler I could find on Amazon was around Rs. 880 or $11 (approx.). A mouse and a keyboard are Human Interface Devices (HID), and an HID can easily be mimicked with something like a cheap Raspberry Pi Pico, bringing the total cost to around Rs. 330 or $4.00.
Grab a Raspberry Pi Pico from Robu.in or ThingBits.in, as these are official resellers of Raspberry Pi in India.
Thonny is a Python IDE with excellent support for the Raspberry Pi. I will be using this IDE so the steps are clearer for anyone working with an RPi for the first time.
After the installation is complete, plug the Pico into your computer while holding the BOOTSEL button on the Pico. This will put the Pico in bootloader mode.
Click the bottom right corner of the main window, and select Configure interpreter.

The Thonny options window will pop up; click Install or update CircuitPython (UF2).


Click Install to start the installation and wait for it to finish. The device will restart after the installation is completed.
We need Adafruit’s HID library, which you can download from here. Extract the contents of the zip file and copy the adafruit_hid folder to the lib folder at the root of the Pico.
If you are using Thonny, open the code.py file by pressing CTRL + O and paste in the following code.

NOTE: You will not see this dialog box if you have the wrong backend or no backend selected. You can change or select the right backend from the bottom right corner of the Thonny IDE.
import usb_hid
from adafruit_hid.mouse import Mouse
from time import sleep

m = Mouse(usb_hid.devices)

# Nudge the pointer 5 pixels left, then 5 pixels right, forever.
while True:
    m.move(-5, 0, 0)
    sleep(0.5)
    m.move(5, 0, 0)
    sleep(0.5)
The line from adafruit_hid.mouse import Mouse imports the Mouse class, allowing us to control the mouse programmatically. The code is straightforward and can be tailored to your specific needs. In my case, I want to move the mouse slightly to keep my status active while I’m away. You can increase the time interval beyond 0.5 seconds, as both Teams and Slack take a while to detect inactivity before marking your status as inactive.
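If the fixed left-right movement looks too robotic, a small variant can randomize both the distance and the pause. This is just a sketch, and the ranges are arbitrary:

import usb_hid
import random
from adafruit_hid.mouse import Mouse
from time import sleep

m = Mouse(usb_hid.devices)

while True:
    step = random.randint(2, 8)        # pixels to nudge
    m.move(-step, 0, 0)
    sleep(random.uniform(0.5, 3.0))    # random pause between nudges
    m.move(step, 0, 0)
    sleep(random.uniform(0.5, 3.0))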
Currently, this Raspberry Pi Pico-based mouse jiggler is a fixture on my other always-on machine, saving me from having to log back in whenever I forget to move the mouse while deep in work.
I grew up playing all the 90s games and I still love them. For quite some time now I have been using the VirtuaNES emulator for retro gaming on my Windows machine. There are other NES emulators out there, but this is the one I have been using for a while, and it has been good so far.
To set up the VirtuaNES emulator on Windows, download it from here or here. Once you download the zip file, extract the contents to any folder and double-click VirtuaNES.exe to run the emulator.

After the emulator is launched, we can start configuring the sound and controller. If you are a keyboard person, no configuration is needed; you can instantly load a ROM and start playing. The default keys are as follows (yours might look a little different):
Go to Options -> Controller to change the keyboard bindings or your controller bindings.

An Xbox One controller is also compatible, and you can configure it easily. Make sure to turn on or plug in the controller before you start the emulator.
Here is a screenshot of the bindings for my Xbox One controller.

To configure the sound settings, go to Options -> Sound.

Even after setting up the sound, there is a chance that you can’t hear it when you play a game. That is due to a setting in the audio settings section of Windows. Refer to the below screenshot and check whether Mono audio is off or on. If it is off, turn it on; that will solve the sound problem in the emulator.

All set now! Let’s get some games, or ROMs as we call them, and load them in the emulator. I downloaded a few ROMs from Emulatorgames.net. Extract the zip file and load the ROM in the emulator by going to File -> Open. You should now see your childhood retro gaming console in front of you.

My team at Microsoft is responsible for building tools that enhance the customer experience when you contact Microsoft support. Although our tools are never used by customers directly, they play an essential role in improving the experience and reducing overall costs. This means our work is critical to the overall success, and it should perform under varying load, especially on holidays.
The team wants to test the application from different geographies and record the relevant metrics.
I am not the primary team member leading the load testing project, but it interests me a lot how the other team is going to achieve this. Although they have a different plan to get this done, I came up with my own simple but useful way of running these tests via Azure Container Instances.
To accomplish this, I plan to use the K6 load testing tool, which is built specifically for engineering teams by the well-known Grafana Labs. I need to set up Azure Container Instances with the K6 image and configure them with an Azure File Share that acts as a volume mount.
I will start by setting up a resource group in Azure, which I will name cannon. You can name it anything you want.
I am using the Azure CLI, aka AZ CLI. If you have not set up AZ CLI, I strongly suggest you do so. If you still choose not to use it, you can create these resources with the Azure Portal, Terraform, etc.
$ az group create --name cannon --location eastus
I need an Azure Storage account because the File Share service is part of Azure Storage.
The below command will create a new Azure Storage account inside the resource group cannon with the Performance tier set to Standard and Redundancy set to Locally-redundant storage. You can change these values as per your needs.
$ az storage account create --name geoloadteststore \
    --resource-group cannon \
    --location eastus \
    --sku Standard_LRS
The below command will create a new file share which I will use as a volume mount for my Azure Container Instance.
az storage share create \
    --account-name geoloadteststore \
    --name loadtestresults \
    --quota 1
In the above command, account-name and name are mandatory; quota is optional, but I have used it to ensure that the share size is not set to the default, which is 5 TB. Setting quota to 1 sets the size of my share to 1 GB.
When I create a new Azure Container Instance, it will use an image to create a container and run it. This means that I don’t have to run the container explicitly unless there is an error.
Here is an AZ command that will create a new Azure Container Instance and execute the load test using K6.
az container create -n k6demo \
    --resource-group cannon \
    --location eastus \
    --image grafana/k6:latest \
    --azure-file-volume-account-name geoloadteststore \
    --azure-file-volume-account-key RhGutivQKlz5llXx9gPxM/CP/dlXWLw5x6/SHyCl+GtLZeRp9cAYEByYTo3vL2EFAy0Nz0H+n1CV+AStTNGEmA== \
    --azure-file-volume-share-name loadtestresults \
    --azure-file-volume-mount-path /results \
    --command-line "k6 run --duration 10s --vus 5 /results/tests/script_eastus.js --out json=/results/logs/test_results_eastus.json" \
    --restart-policy Never
The above command has lots of details, and a few parts of it deserve some attention. For the most part, things are simple to understand. The image I am using is provided by Grafana from their verified Docker Hub account. I then use the Azure File Share information to set up the volume mount.
The important part here is the way the volume mount is used. The --azure-file-volume-mount-path has the value /results, which will be the mount path in the container. This means you don’t have to create a folder named results in the file share. If you turn your attention to the next parameter, --command-line, you can see that the K6 test script is read from the /results/tests folder and the output of the command is stored in /results/logs. If you wish, you can also use a full path of a blob storage or even an S3 bucket to read your script file.
Navigate to the file share in Azure Portal and create 2 folders named logs and tests.

Inside the tests folder, add a K6 script file you have created. I have created multiple script files for different locations.

For this example, I am using a demo script with some changes. Note the change in the name of the summary file: it contains the location name so the logs are easy to identify.
import http from "k6/http";
import { sleep } from "k6";

export default function () {
  http.get("https://test.k6.io");
  sleep(1);
}

export function handleSummary(data) {
  return {
    "/results/logs/summary_eastus.json": JSON.stringify(data), // the default data object
  };
}
As per the K6 documentation, I have added an additional function named handleSummary. This function generates the summary of the entire test and saves it in JSON format, which I can later use for visualization. This is the same output you see when you run K6 in your console. The other file referenced in the command above, test_results_eastus.json, will have details about every test run.
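Instead of keeping one script per region, the same file could be parameterized: K6 exposes variables passed with the -e flag through the __ENV object. A sketch of that idea (REGION is a name I chose; it would be passed as k6 run -e REGION=eastus ... in the container’s --command-line):

import http from "k6/http";
import { sleep } from "k6";

// Read the region passed at launch: k6 run -e REGION=eastus script.js
const region = __ENV.REGION || "unknown";

export default function () {
  http.get("https://test.k6.io");
  sleep(1);
}

export function handleSummary(data) {
  return {
    [`/results/logs/summary_${region}.json`]: JSON.stringify(data),
  };
}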
I executed the az container create command three times, once for each of three different regions.

I now have 3 Azure Container Instances, each of which ran the K6 load test for a different location. You can also see the location of each Azure Container Instance in the above screenshot. After the load tests finished, I can view my JSON logs in the file share.

This way I can run load tests on my web apps or services from anywhere, or at least from all the Azure regions.
With the above working as expected, I think there are a few things I can really improve on.
The entire process is manual and error-prone. It would be good to automate it end to end, with less user intervention during execution; writing the load tests would remain the user’s responsibility, though. Going forward, I would like to automate the creation of all these resources using Terraform and then execute the TF scripts automatically. If you want to do this for any of your Terraform projects, refer to my article on Medium.
After getting the test logs from the file share, I also want to see what the load test results look like. As of now, I don’t have a tool to do so, but I found a few on GitHub that can visualize the output of K6 load tests. Maybe I will write one or use an open-source option; I am not sure at the moment.
I hope you enjoyed this article and learned a thing or two about Azure and K6. Here are a few more resources that will be helpful.
Terraform is my go-to IaC tool for building my infrastructure in Azure. I usually use an Azure DevOps pipeline to execute my Terraform plan, but it would be nice to know if I can execute it programmatically or on demand. Even on demand, you can trigger a CI/CD pipeline to provision the infrastructure, but maybe that is too much for a simple project.
One of the biggest pain points has been authentication in command-line tooling. I can execute my Terraform plan if I have done az login in the shell/terminal I am using. I can also perform the same operation programmatically, but it will still open a web browser and ask me for authentication. I don’t want user intervention when performing this operation, so the way I can achieve this is by using a Service Principal.
You can also make use of Managed Service Identity (MSI), but not all Azure resources support it. You can check the list of supported resources here.
The service principal I am planning to use will let me create any Azure resource. This is equivalent to the az login CLI command.
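As an aside, the same service principal can also be created and granted the Contributor role in one step from the CLI with az ad sp create-for-rbac. The command below is a sketch (the name is arbitrary; the portal walkthrough that follows achieves the same result):

az ad sp create-for-rbac --name tf-automation \
    --role Contributor \
    --scopes /subscriptions/<SUBSCRIPTION_ID>

The command prints the appId (client ID), password (client secret), and tenant values you would otherwise collect from the portal.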
Add a new App registration in Azure Active Directory. Give your application a name, select Web for the Redirect URI type (the URL can be left blank), and click Register to create the application.

In the Overview section, copy the client ID and tenant ID. You also need a subscription ID, which you can find in your subscription resource in the portal.

Click on Certificates & secrets, and then click + New client secret. Follow the instructions to create a new secret; once done, you will be presented with a secret value which you should copy and save somewhere safe (preferably in Azure Key Vault), as this is the only time it will be visible to you.

In the end you should have these values with you: the client ID, the tenant ID, the subscription ID, and the client secret.
Now if you try to create a new resource using Terraform, it will fail because the service principal does not have permission to manage resources in your subscription. To grant permissions, go to Subscriptions in the Azure portal and click Access control (IAM).

Click on Add role assignment and then click Privileged administrator roles.

You can ignore the warning shown at the bottom; we need this option to add Contributor access to the subscription we want to manage.

Select Contributor from the list and click Next.

Select User, group or service principal and click + Select members.

Search for your application by name, select it, and then click Select. Verify the details in the last step and click Review + assign.

Back in the Access control (IAM) blade, you can see the role assignment on the subscription.

Let’s see a very basic example of getting this done programmatically. Set up a new Go project and import these packages.
import (
	"bytes"
	"fmt"
	"io"
	"log"
	"os"
	"os/exec"
)
In the main function, add the below code:
os.Setenv("ARM_CLIENT_ID", "")
os.Setenv("ARM_CLIENT_SECRET", "")
os.Setenv("ARM_TENANT_ID", "")
os.Setenv("ARM_SUBSCRIPTION_ID", "")
//az login --service-principal -u CLIENT_ID -p CLIENT_SECRET --tenant TENANT_ID
cmd := exec.Command("az", "login", "--service-principal", "-u", os.Getenv("ARM_CLIENT_ID"), "-p", os.Getenv("ARM_CLIENT_SECRET"), "--tenant", os.Getenv("ARM_TENANT_ID"))
var stdoutBuf, stderrBuf bytes.Buffer
cmd.Stdout = io.MultiWriter(os.Stdout, &stdoutBuf)
cmd.Stderr = io.MultiWriter(os.Stderr, &stderrBuf)
err := cmd.Run()
if err != nil {
log.Fatalf("cmd.Run() failed with %s\n", err)
}
outStr := string(stdoutBuf.Bytes())
fmt.Println(outStr)
The first thing we did was set the environment variables. Name them exactly as shown in the example above; these are the same names Terraform uses internally.
Normally I would use a service principal like this:
$ az login --service-principal -u CLIENT_ID -p CLIENT_SECRET --tenant TENANT_ID
As we are automating this process, we can use exec.Command to execute this command with parameters:
cmd := exec.Command("az", "login", "--service-principal", "-u", os.Getenv("ARM_CLIENT_ID"), "-p", os.Getenv("ARM_CLIENT_SECRET"), "--tenant", os.Getenv("ARM_TENANT_ID"))
This logs the service principal in for the terminal session where this application runs.

Going forward, you can remove or comment out the above code and leave the environment variables as they are in the code file.
As a next step, you could also take the Terraform binary path from an environment variable and automate the execution just like above. But there is a more efficient way of doing this, and for that we must make slight changes to our code.
First, we need to check whether Terraform is installed on the machine. On my machine, I have Terraform installed and added to the environment under the name terraform. In Go, I can get this path with os.Getenv, passing in the name of the environment variable, terraform.
If the path exists, I will use it; if not, I can install a specific version of Terraform. Here is the complete code for the above explanation:
package main

import (
	"context"
	"log"
	"os"
	"path/filepath"

	"github.com/hashicorp/go-version"
	"github.com/hashicorp/hc-install/product"
	"github.com/hashicorp/hc-install/releases"
)

func main() {
	var execPath string
	var tfInstallDir string

	// Look for an existing install via the user-defined "terraform" environment variable.
	tfBin := os.Getenv("terraform")
	if len(tfBin) > 0 {
		log.Printf("Found Terraform: %s", tfBin)
		execPath = filepath.Join(tfBin, "terraform.exe")
	} else {
		log.Print("Terraform not found....installing")
		installer := &releases.ExactVersion{
			Product: product.Terraform,
			Version: version.Must(version.NewVersion("1.4.6")),
		}
		wd, _ := os.Getwd()
		tfInstallDir = filepath.Join(wd, "tf")
		if _, err := os.Stat(tfInstallDir); err != nil {
			log.Printf("Installation directory not found...creating")
			if err = os.MkdirAll(tfInstallDir, os.ModePerm); err != nil {
				log.Fatalf("ERROR: Cannot create \"%s\" directory - %v", tfInstallDir, err.Error())
			}
			installer.InstallDir = tfInstallDir
			log.Printf("Installing version: %s", installer.Version.String())
			execPath, err = installer.Install(context.Background())
			if err != nil {
				log.Fatalf("Error installing Terraform: %s", err)
			}
			execPath = filepath.Join(installer.InstallDir, "terraform.exe")
			log.Printf("Installed Terraform %s at %s", installer.Version.String(), execPath)
		} else {
			execPath = filepath.Join(tfInstallDir, "terraform.exe")
			log.Printf("Terraform %s found at %s", installer.Version.String(), execPath)
		}
	}

	log.Printf("Using Terraform binary at %s", execPath)
}
The above program first looks for the terraform environment variable and tries to get its value. If the value exists, the execPath variable will hold it; if not, Terraform is not installed on this machine and requires installation. The two packages that help us install the right version of Terraform are github.com/hashicorp/hc-install/product and github.com/hashicorp/hc-install/releases.
We first prepare the installer by providing the details of the product we want to install; in our case, it is Terraform. You can pin a specific version based on your requirements, such as 1.0.6, and that exact version will be installed.
The installer.Install function takes a context and performs the installation for us. Once the installation is complete, you can see the path of the Terraform binary.
Note that if I had not provided an installation path or directory, the installation would be done in a temp location on your machine. If you don’t want the installation in a temporary location, and also want to speed up repeated executions, set the InstallDir property to choose the installation path. Check the complete code listing below for the InstallDir implementation.
Next, we set up the working directory where our Terraform code is. We need to import a new package called tfexec:
"github.com/hashicorp/terraform-exec/tfexec"
and the code:
workingDir := "iac"
tf, err := tfexec.NewTerraform(workingDir, execPath)
if err != nil {
log.Fatalf("Error running NewTerraform: %s", err)
}
The NewTerraform function takes two parameters: the working directory where you keep your .tf files, and execPath, the executable path of the Terraform binary.
After this we can perform terraform init and apply like this:
log.Print("Start executing TF Init")
err = tf.Init(context.Background(), tfexec.Upgrade(true))
if err != nil {
log.Fatalf("Error running Init: %s", err)
}
log.Print("Finished running TF Init")
log.Print("Start running TF Apply")
err = tf.Apply(context.Background())
if err != nil {
log.Fatalf("Error running Apply: %s", err)
}
log.Print("Finished running TF Apply")
Both the init and apply calls are simple to understand. The last one is the show command. If you have worked with the Terraform CLI, you will also want to show the output after terraform apply has succeeded. The output variables defined in your .tf files return values like the IP address of the virtual machine or the DNS name, which you can save or use somewhere else.
These are the contents of my output.tf file:
output "public_ip_address" {
value = azurerm_linux_virtual_machine.popcorndbvm.public_ip_address
}
output "tls_private_key" {
value = tls_private_key.popcornssh.private_key_openssh
sensitive = true
}
We can also check whether an output is marked as sensitive. You can see here that I have marked tls_private_key as sensitive. When you traverse the output variables, you can check the Sensitive property and prevent the value from being displayed in your terminal. Below is the code that does exactly that:
state, err := tf.Show(context.Background())
if err != nil {
	log.Fatalf("Error running Show: %s", err)
}

for s, i := range state.Values.Outputs {
	val := i.Value
	if s == "tls_private_key" && i.Sensitive {
		data := val.(string)
		// 0600 keeps the key file readable only by the current user
		err := ioutil.WriteFile("propcornvm_key.key", []byte(data), 0600)
		if err != nil {
			log.Fatalf("Cannot save private key to the local machine. - %s", err.Error())
		} else {
			fmt.Printf("Private Key saved: %s\n", "propcornvm_key.key")
		}
	} else {
		fmt.Printf("%s : %v", s, val)
		fmt.Println()
	}
}
The state variable is a pointer to tfjson.State; once Show runs successfully, the outputs are stored in a map[string]*tfjson.StateOutput, which we can iterate over to get the values of the output variables.
NOTE: You can use my terraform files to create a web app, app service plan, Linux virtual machine etc. You can view these files here.
Here is the complete code. You need to update the environment variables, replacing them with the values you obtained from the Azure portal, and set the workingDir variable to the path where your .tf files are.
package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"

	"github.com/hashicorp/go-version"
	"github.com/hashicorp/hc-install/product"
	"github.com/hashicorp/hc-install/releases"
	"github.com/hashicorp/terraform-exec/tfexec"
)

func main() {
	// Update these environment variables with yours.
	os.Setenv("ARM_CLIENT_ID", "")
	os.Setenv("ARM_CLIENT_SECRET", "")
	os.Setenv("ARM_TENANT_ID", "")
	os.Setenv("ARM_SUBSCRIPTION_ID", "")

	// az login --service-principal -u CLIENT_ID -p CLIENT_SECRET --tenant TENANT_ID
	// cmd := exec.Command("az", "login", "--service-principal", "-u", os.Getenv("ARM_CLIENT_ID"), "-p", os.Getenv("ARM_CLIENT_SECRET"), "--tenant", os.Getenv("ARM_TENANT_ID"))
	// var stdoutBuf, stderrBuf bytes.Buffer
	// cmd.Stdout = io.MultiWriter(os.Stdout, &stdoutBuf)
	// cmd.Stderr = io.MultiWriter(os.Stderr, &stderrBuf)
	// err := cmd.Run()
	// if err != nil {
	//	log.Fatalf("cmd.Run() failed with %s\n", err)
	// }
	// outStr := stdoutBuf.String()
	// fmt.Println(outStr)

	var execPath string
	var tfInstallDir string
	var err error

	// NOTE: "terraform1" will normally not be set, which forces the install branch below;
	// change it to "terraform" to reuse an existing local install.
	tfBin := os.Getenv("terraform1")
	if len(tfBin) > 0 {
		log.Printf("Found Terraform: %s", tfBin)
		execPath = filepath.Join(tfBin, "terraform.exe")
	} else {
		log.Print("Terraform not found....installing")
		installer := &releases.ExactVersion{
			Product: product.Terraform,
			Version: version.Must(version.NewVersion("1.4.6")),
		}
		wd, _ := os.Getwd()
		tfInstallDir = filepath.Join(wd, "tf")
		if _, err := os.Stat(tfInstallDir); err != nil {
			log.Printf("Installation directory not found...creating")
			if err = os.MkdirAll(tfInstallDir, os.ModePerm); err != nil {
				log.Fatalf("ERROR: Cannot create \"%s\" directory - %v", tfInstallDir, err.Error())
			}
			installer.InstallDir = tfInstallDir
			log.Printf("Installing version: %s", installer.Version.String())
			execPath, err = installer.Install(context.Background())
			if err != nil {
				log.Fatalf("Error installing Terraform: %s", err)
			}
			execPath = filepath.Join(installer.InstallDir, "terraform.exe")
			log.Printf("Installed Terraform %s at %s", installer.Version.String(), execPath)
		} else {
			execPath = filepath.Join(tfInstallDir, "terraform.exe")
			log.Printf("Terraform %s found at %s", installer.Version.String(), execPath)
		}
	}

	workingDir := "iac"
	tf, err := tfexec.NewTerraform(workingDir, execPath)
	if err != nil {
		log.Fatalf("Error running NewTerraform: %s", err)
	}

	log.Print("Start executing TF Init")
	err = tf.Init(context.Background(), tfexec.Upgrade(true))
	if err != nil {
		log.Fatalf("Error running Init: %s", err)
	}
	log.Print("Finished running TF Init")

	log.Print("Start running TF Apply")
	err = tf.Apply(context.Background())
	if err != nil {
		log.Fatalf("Error running Apply: %s", err)
	}
	log.Print("Finished running TF Apply")

	state, err := tf.Show(context.Background())
	if err != nil {
		log.Fatalf("Error running Show: %s", err)
	}

	for s, i := range state.Values.Outputs {
		val := i.Value
		if s == "tls_private_key" && i.Sensitive {
			data := val.(string)
			// 0600 keeps the key file readable only by the current user
			err := ioutil.WriteFile("propcornvm_key.key", []byte(data), 0600)
			if err != nil {
				log.Fatalf("Cannot save private key to the local machine. - %s", err.Error())
			} else {
				fmt.Printf("Private Key saved: %s\n", "propcornvm_key.key")
			}
		} else {
			fmt.Printf("%s : %v", s, val)
			fmt.Println()
		}
	}
}
The terraform-exec module is used to construct the Terraform commands; take a look at its repository. Before you plan to use this module in your production environment, consider the excerpt below from the repository README:
While terraform-exec is already widely used, please note that this module is not yet at v1.0.0, and that therefore breaking changes may occur in minor releases.
Here is the output of the above example, when I run it with my Azure service principal.

You can see one output variable, public_ip_address. Because we marked the other output variable as sensitive, it is not shown in the terminal; instead, its value is stored in a file named propcornvm_key.key.
We can see that all our resources were successfully created in the Azure portal.
