Currently I'm using the Custom Script Extension to run scripts on demand on my Azure VMs as part of our software solution. Our other dev team is moving an application to a scale set, and I am no longer able to deploy the Custom Script Extension on demand to the scale set instances. The only solution I have found for running the Custom Script Extension on scale set instances is to reconfigure the deployment template with it. This method doesn't work for me, since the scripts run on demand and change frequently, and updating the template every time is bad practice.
Is there any way to configure the Custom Script Extension on scale set instances on demand, as on regular virtual machines?
PowerShell example of a regular on-demand script deployment on a VM:
Set-AzureRmVMCustomScriptExtension -ResourceGroupName myResourceGroup `
-VMName myVM `
-Location myLocation `
-FileUri myURL `
-Run 'myScript.ps1' `
-Name DemoScriptExtension
I found a workaround for this using PowerShell and ARM JSON templates (I'm using PowerShell 5.1). In commandToExecute under virtualMachineProfile in your JSON template, specify a value that changes on every deployment; this forces the command to re-execute each time the template is deployed. You will see in my template that I appended ' -Date ', deployment().name to the commandToExecute. The value for deployment().name is specified in my New-AzureRmResourceGroupDeployment command as:
-Name $($(Get-Date -format "MM_dd_yyyy_HH_mm"))
The deployment name is based on the date and time, so it changes every minute.
PowerShell Command:
New-AzureRmResourceGroupDeployment -ResourceGroupName $ResourceGroupName -TemplateFile $PathToJsonTemplate -TemplateParameterFile $PathToParametersFile -Debug -Name $($(Get-Date -format "MM_dd_yyyy_HH_mm")) -force
The Custom Script Extension section under virtualMachineProfile in my template appears as follows (pay attention to the commandToExecute):
"virtualMachineProfile": {
"extensionProfile": {
"extensions": [
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "MyExtensionName",
"location": "[parameters('location')]",
"properties": {
"publisher": "Microsoft.Compute",
"type": "CustomScriptExtension",
"typeHandlerVersion": "1.8",
"autoUpgradeMinorVersion": true,
"settings": {
"fileUris": [
"[concat(parameters('customScriptExtensionSettings').storageAccountUri, '/scripts/MyScript.ps1')]"
],
"commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File MyScript.ps1', ' -Date ', deployment().name)]"
},
"protectedSettings": {
"storageAccountName": "[parameters('customScriptExtensionSettings').storageAccountName]",
"storageAccountKey": "[listKeys(variables('accountid'),'2015-05-01-preview').key1]"
}
}
},
This will allow you to update a Custom Script Extension on a Virtual Machine Scale Set that has already been deployed. I hope this helps!
Is there any way to configure custom script extension on scale set instances on demand like on regular virtual machines?
For now, Azure does not support this.
We can only use a VMSS custom script extension to install software at the time the scale set is provisioned.
For more information about VMSS extensions, please refer to this link.
How should I deploy a Moleculer microservice project on a server without using Docker and Kubernetes?
I pull my updated code onto the server and run the npm run dev command, and the project runs as expected.
But now I want to set up pm2 for this project, so what do I need to do?
I tried to run the npm run start command on the server, but I am getting the output below and the project is not running.
Please help.
Your problem is that you didn't configure any services to start. For Docker and Kubernetes environments, we use the SERVICES env variable to configure which services should be loaded on a Moleculer node.
You can use the same approach here, or modify the start script in package.json and set the services you want to load. E.g.:
moleculer-runner -e services/**/*.service.js
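For instance, a minimal package.json scripts entry along these lines (the services/**/*.service.js glob is an assumption about your project layout) would make npm run start load every service file:
{
  "scripts": {
    "start": "moleculer-runner services/**/*.service.js"
  }
}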
I got the solution.
If we don't use Docker and Kubernetes for a Moleculer project, and instead clone our code directly onto the server like a normal Node.js (Express) project, then we need to create an index.js file with the lines below.
const { ServiceBroker } = require('moleculer');
const config = require('./moleculer.config');

// Enable hot reloading of services while developing
config.hotReload = true;

// Create the broker and load every *.service.js file under ./services
const broker = new ServiceBroker(config);
broker.loadServices('services', '**/*.service.js');
broker.start();
With this, Moleculer starts all the services of our project.
Once the index file exists, we can start the project using pm2.
I created a start script (api-init.json) as below:
[
  {
    "script": "./node_modules/moleculer/bin/moleculer-runner.js",
    "args": "services",
    "instances": "max",
    "watch": false,
    "name": "API"
  }
]
Then use pm2 to start:
pm2 start api-init.json
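Once started, the usual pm2 commands can be used to verify the processes (the process name "API" comes from the config above):
pm2 list
pm2 logs API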
Is there a way to set up bootstrap actions that run on EMR after the core services (Spark etc.) are installed? I am using emr-5.27.0.
You can submit the script as a step rather than a bootstrap action. For example, I made an SSL certificate update script and applied it to the EMR cluster as a step. This is part of my Lambda function, written in Python, but you can also add the step manually in the console or from other languages.
Steps=[{
    'Name': 'PrestoCertificate',
    'ActionOnFailure': 'CONTINUE',
    'HadoopJarStep': {
        'Jar': 's3://ap-northeast-2.elasticmapreduce/libs/script-runner/script-runner.jar',
        'Args': ['s3://myS3/PrestoSteps_InstallCertificate.sh']
    }
}]
The key point is script-runner.jar, which is pre-built by Amazon; you can use it in any region by changing the region prefix. It receives a .sh file and runs it.
One thing you should know: the script runs on all the nodes, so if you want it to run only on the master instance you have to wrap your code in an if statement.
#!/bin/bash
# instance.json reports whether this node is the cluster's master
BOOL=$(cat /emr/instance-controller/lib/info/instance.json | jq .isMaster)
if [ "$BOOL" == "true" ]
then
  <your code>
fi
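If you prefer to add the same step from the command line rather than from a Lambda, the AWS CLI equivalent should look roughly like this (the cluster ID is a placeholder; the JAR and script paths mirror the snippet above):
aws emr add-steps --cluster-id j-XXXXXXXXXXXXX \
  --steps Type=CUSTOM_JAR,Name=PrestoCertificate,ActionOnFailure=CONTINUE,Jar=s3://ap-northeast-2.elasticmapreduce/libs/script-runner/script-runner.jar,Args=[s3://myS3/PrestoSteps_InstallCertificate.sh]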
Developing full-stack web apps, I would like to have all my code and build logic on a Linux machine (i.e. git, Docker containers and other terminal commands), but all my development workflow on my Windows machine (so my IDE, web browser and REST client), accessing it via SSH.
I've managed to do all of that except for the IDE: I can only edit individual files via SSH instead of managing a folder as a project. So right now I use VSCode on the Linux machine (Ubuntu), and it's the last thing preventing me from dropping the graphical interface and installing Ubuntu Server on it.
And no, I don't want to use Vim or Emacs. I want to use VSCode, or another modern IDE, but preferably VSCode.
Try using the Remote VSCode plugin as explained here: Using Remote VSCode
This discussion is exactly about your problem: VSCode issue 13643 on GitHub
EDIT: I have recently found a new VSCode plugin on Github: vs-deploy. It was designed to deploy files and folders remotely very quickly. It seems to be working and I haven't found any bugs so far. It works with FTP, SFTP (SSH) and many other protocols.
The SSH.NET NuGet package can be used quite nicely to copy files and folders.
Here is an example:
using System.IO;
using Renci.SshNet;

var host = "YourServerIpAddress";
var port = 22;
var user = "root"; // TODO: fix
var yourPathToAPrivateKeyFile = @"C:\Users\Bob\mykey"; // Use certificate for login
var authMethod = new PrivateKeyAuthenticationMethod(user, new PrivateKeyFile(yourPathToAPrivateKeyFile));
var connectionInfo = new ConnectionInfo(host, port, user, authMethod);

using (var client = new SftpClient(connectionInfo))
{
    client.Connect();
    if (client.IsConnected)
    {
        // TODO: Copy folders recursively etc.
        DirectoryInfo source = new DirectoryInfo(@"C:\your\project\publish\path");
        foreach (var file in source.GetFiles())
        {
            client.UploadFile(File.OpenRead(file.FullName), $"/home/yourUploadPath/{file.Name}", true);
        }
    }
}
When you create an upload console application using the code above, you should be able to trigger the upload automatically with a post-build event, by adding a section like this to your project file:
<Target Name="PostBuild" AfterTargets="PostBuildEvent">
<Exec Command="path to execute your script or application" />
</Target>
If you prefer to do the same but more manually, you can perform a
dotnet build --configuration Release
followed by a
dotnet publish ~/projects/app1/app1.csproj
and then use the code above to perform an upload.
Search for the extension SSHExtension developed by Vitaly Kondratiev.
Install the extension.
Then edit the server list JSON with the server details, e.g.:
"sshextension.serverList": [
{
"name": "Kuberntes 212",
"host": "10.64.234.54",
"port": 22,
"username": "root",
"password": "byebye"
}
]
Save the file.
Then press Ctrl+Shift+P and choose the sshextension "open ssh extension category" command. It will create a session for you.
Even easier, if you need the entire directory structure in your local workspace:
Use the extension ftp-simple in VSCode. It works wonders, trust me.
Install ftp-simple in VSCode.
Press Ctrl+Shift+P.
Select ftp-simple: config.
Configure your settings:
[
  {
    "name": "Kubernetes 212",
    "host": "10.75.64.2",
    "port": 22,
    "type": "sftp",
    "username": "root",
    "password": "byebye",
    "path": "/home/vinod/",
    "autosave": true,
    "confirm": true
  }
]
Save the file.
Now press Ctrl+Shift+P
and select ftp-simple: remote directory to workspace.
Voilà, your work is done. Life is simple.
On Windows I have to run the command start-ssh-agent.cmd on each new terminal session I open. My development environment is VSCode, and I open a dozen new terminals each day; after opening each one, I have to run this command manually.
Is there a way to run this command automatically each time I open a terminal?
This could take the form of a VSCode extension, a VSCode configuration (settings), or a Windows environment configuration.
Any idea?
On Linux systems you should use:
"terminal.integrated.shellArgs.linux"
On Windows and OSX:
"terminal.integrated.shellArgs.windows"
and
"terminal.integrated.shellArgs.osx"
respectively.
If you want to apply the shellArgs setting on a per-workspace basis, you can, despite the fact that the documentation says:
The first time you open a workspace which defines any of these settings, VS Code will warn you and subsequently always ignore the values after that
At least version 1.42 of VSCode asks you something like:
"This workspace wants to set shellArgs, do you want to allow it?"
See issue 19758
On Linux, if you are using bash (the default shell in VSCode), there are some subtleties:
"terminal.integrated.shellArgs.linux": ["your_init_script.sh"]
will execute the script and close the terminal right away. To prevent this, you'll have to end the script with the $SHELL command.
#!/bin/bash
echo "init"
export PATH=$PATH:/xxx/yyy/zzz # or do whatever you want
$SHELL
But that way you end up in a subshell. Sometimes that's unacceptable (Read 1) (Read 2).
"terminal.integrated.shellArgs.linux": ["--init-file", "your_init_script.sh"]
will leave you in the initial shell, but will not execute the .bashrc init file. So you may want to source ~/.bashrc inside your_init_script.sh:
#!/bin/bash
source ~/.bashrc
echo "init"
export PATH=$PATH:/xxx/yyy/zzz # or do whatever you want
And if you already have some_init_script.sh in a repository, and for some reason don't feel like adding source ~/.bashrc into it, you can use this:
"terminal.integrated.shellArgs.linux": ["--init-file", "your_init_script.sh"]
your_init_script.sh:
#!/bin/bash
source ~/.bashrc
source some_init_script.sh
some_init_script.sh:
#!/bin/bash
echo "init"
export PATH=$PATH:/xxx/yyy/zzz # or do whatever you want
Outside of VSCode you can do this without creating an extra file, like this:
bash --init-file <(echo "source ~/.bashrc; source some_init_script.sh")
But I could not pass this string into terminal.integrated.shellArgs.linux; it needs to be split into an array somehow, and none of the combinations I tried worked.
Also, you can open the terminal at a specific folder:
terminal.integrated.cwd
Change the environment:
"terminal.integrated.env.linux"
"terminal.integrated.env.windows"
"terminal.integrated.env.osx"
And even change the terminal to your liking with:
terminal.integrated.shell.linux
terminal.integrated.shell.windows
terminal.integrated.shell.osx
Or
terminal.external.linuxExec
terminal.external.osxExec
terminal.external.windowsExec
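For example, a settings.json fragment combining a working directory with an extra environment variable might look like this (the folder path and variable name are just placeholders):
{
  "terminal.integrated.cwd": "${workspaceFolder}/src",
  "terminal.integrated.env.linux": {
    "MY_FLAG": "1"
  }
}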
I actually found a pretty neat Linux solution for this. It should also work on Windows if you use a shell like bash, though I'm not sure it's possible with vanilla CMD.
Add something like this to your .bashrc or .zshrc:
#
# Allow parent to initialize shell
#
# This is awesome for opening terminals in VSCode.
#
if [[ -n $ZSH_INIT_COMMAND ]]; then
echo "Running: $ZSH_INIT_COMMAND"
eval "$ZSH_INIT_COMMAND"
fi
Now, in your VSCode workspace settings, you can set an environment variable like this:
"terminal.integrated.env.linux": {
"ZSH_INIT_COMMAND": "source dev-environment-setup.sh"
}
Now the script "dev-environment-setup.sh" will be automatically sourced in all new VSCode terminal windows.
You can do the following:
"terminal.integrated.shellArgs.windows": ["start-ssh-agent.cmd"]
Modified from: https://code.visualstudio.com/docs/editor/integrated-terminal#_shell-arguments
The other answers are great but a little outdated; with them you will get a warning in VSCode. Here is what I'm doing in my XXX.code-workspace file on Linux:
"terminal.integrated.profiles.linux": {
"BashWithStartup": {
"path": "bash",
"args": [
"--init-file",
"./terminal_startup.sh"
]
}
},
"terminal.integrated.defaultProfile.linux": "BashWithStartup"
Be sure that your terminal_startup.sh script is executable:
chmod u+x terminal_startup.sh
For anyone using the wonderful cmder, you'll need something akin to the following in your settings.json
{
  "terminal.integrated.shell.windows": "cmd.exe",
  "terminal.integrated.env.windows": {
    "CMDER_ROOT": "C:\\path\\to\\cmder"
  },
  "terminal.integrated.shellArgs.windows": [
    "/k",
    "%CMDER_ROOT%\\vendor\\bin\\vscode_init.cmd"
  ]
}
You can then add any aliases to the user_aliases.cmd file, which should already exist at %CMDER_ROOT%\\config\\user_aliases.cmd.
I use the following for PowerShell on Windows:
{
  "terminal.integrated.shellArgs.windows": [
    "-NoExit",
    "-Command", "conda activate ./env"
  ]
}
After much trial and error, the following worked for me on OSX:
"terminal.integrated.shellArgs.osx": [
"-l",
"-c",
"source script.sh; bash"
],
For context, I'm using this with a jupyter notebook to set environment variables that cannot simply be defined using terminal.integrated.env.osx
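For illustration, such a script.sh might just export values computed at shell startup (the variable name and value here are placeholders):
#!/bin/bash
# Values computed at startup like this can't be expressed
# as static strings in terminal.integrated.env.osx
export DATA_DIR="$(pwd)/data"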
After the April 2021 update the configuration structure has changed.
Check the documentation.
There's now a distinction between terminal profiles. To run a file on Windows:
PowerShell
{
  "terminal.integrated.profiles.windows": {
    "My PowerShell": {
      "path": "pwsh.exe",
      "args": ["-noexit", "-file", "${env:APPDATA}\\PowerShell\\my-init-script.ps1"]
    }
  },
  "terminal.integrated.defaultProfile.windows": "My PowerShell"
}
Command Prompt
{
  "terminal.integrated.profiles.windows": {
    "cmder": {
      "path": "C:\\WINDOWS\\System32\\cmd.exe",
      "args": ["/K", "C:\\cmder\\vendor\\bin\\vscode_init.cmd"]
    }
  },
  "terminal.integrated.defaultProfile.windows": "cmder"
}
I did this to get an x64 Native Tools Command Prompt for VS 2022:
{
  "terminal.integrated.profiles.windows": {
    "PowerShell": {
      "source": "PowerShell",
      "icon": "terminal-powershell"
    },
    "x64 Native": {
      "path": [
        "${env:windir}\\System32\\cmd.exe"
      ],
      "args": [
        "/K",
        "C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Auxiliary\\Build\\vcvars64.bat"
      ],
      "icon": "terminal-cmd"
    }
  },
  "terminal.integrated.defaultProfile.windows": "x64 Native"
}
If you use PowerShell, you can add a PowerShell script to your profile, in which you can execute what you want. Each development environment has four profiles, stored in $Profile:
AllUsersAllHosts
AllUsersCurrentHost
CurrentUserAllHosts
CurrentUserCurrentHost
E.g. create a profile in VSCode with:
code $profile.CurrentUserAllHosts
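Since the profile is just a script that runs on every new session, it can simply call the command from the question; a minimal sketch, assuming start-ssh-agent.cmd is on your PATH:
# CurrentUserAllHosts profile: runs in every new PowerShell session
start-ssh-agent.cmd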
Some more details
If you deploy a CloudFormation template creating a Kinesis stream, how can you provide its outputs, such as an ARN, to a Lambda created in the same deployment? Does CF happen before Serverless creates the Lambdas, and is there a way to store the CloudFormation values in the Lambda?
To store the ARN from your CloudFormation template "s-resource-cf.json", add some items to the "Outputs" section:
"Outputs": {
"InsertVariableNameForLaterUse": {
"Description": "This is the Arn of My new Kinesis Stream",
"Value": {
"Fn::GetAtt": [
"InsertNameOfCfSectionToFindArnOf",
"Arn"
]
}
}
}
Fn::GetAtt is a CF function that gets an attribute from another resource being created.
When you deploy the CF template using serverless resources deploy -s dev -r eu-west-1, the Kinesis stream is created for that stage/region and the ARN is saved into the region properties file /_meta/resources/variables/s-variables-dev-euwest1.json. Note the initial capitalisation changes to insertVariableNameForLaterUse.
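The saved entry in that variables file would then look something like this (the ARN value is a made-up placeholder):
{
  "insertVariableNameForLaterUse": "arn:aws:kinesis:eu-west-1:123456789012:stream/MyStream"
}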
You can then use that in the function's s-function.json as ${insertVariableNameForLaterUse}, for example in the environment section:
"environment": {
"InsertVariableNameWeWantToUseInLambda": "${insertVariableNameForLaterUse}"
...
}
and reference this variable in your Lambda using something like:
var myKinesisStreamArn = process.env.InsertVariableNameWeWantToUseInLambda;
CloudFormation happens before Lambda deployments. Though you should probably control that with a script rather than just using the dashboard:
serverless resources deploy -s dev -r eu-west-1
serverless function deploy --a -s dev -r eu-west-1
serverless endpoint deploy --a -s dev -r eu-west-1
Hope that helps.
What deployment steps are you following here with Serverless? For the first part of your question: you can do a 'sls resources deploy' to deploy all CF-related resources, and then a 'sls function deploy' or 'sls dash deploy' to deploy the Lambda functions. So technically, resource deploy (CF) does not actually deploy the Lambda functions.
For the second part: if you have a use case where you want to use the output of a CF resource being created, this feature has (as of now) been added/merged into v0.5 of Serverless, which has not yet been released.