For a new VM I need to set the property enableAutomaticUpdates = false using Az PowerShell

I'm trying to create a new VM in Azure using Az PowerShell.
I want to disable automatic patching, which should be possible by setting this property:
$vmObj = Set-AzVMOperatingSystem -VM $vmObj -patchMode "Manual" ..
When I finally call New-AzVM ... to create the VM after setting all the properties (among others using Set-AzVMOperatingSystem ...), I get this error:
New-AzVM : The patchMode 'Manual' is invalid. For patchMode 'Manual', the property 'enableAutomaticUpdates' must be set to false.
ErrorCode: InvalidParameter
So I have to set enableAutomaticUpdates = false, which I have not been able to do using PowerShell.
The only cmdlet where I can find this property is Set-AzVmssOsProfile, but I don't want to create a Virtual Machine Scale Set.
Looking at the template.json you get when creating a VM via the portal GUI, this property is set like this:
...
"osProfile": {
"computerName": "[parameters('virtualMachineComputerName')]",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]",
"windowsConfiguration": {
"enableAutomaticUpdates": false,
"provisionVmAgent": true,
"patchSettings": {
"enableHotpatching": "[parameters('enableHotpatching')]",
"patchMode": "[parameters('patchMode')]"
}
}
},
...
Is there a way to set this using PowerShell?
I guess posting the whole PS script would just be noise since it's rather large, but if you think it's valuable I'll gladly do so.

I'm working on the same issue.
Try this with Set-AzVMOperatingSystem:
-EnableAutoUpdate:$false
(it's a switch parameter), like this:
$VirtualMachine = Set-AzVMOperatingSystem -VM $VirtualMachine -Windows -ComputerName $ComputerName -Credential $Credential -ProvisionVMAgent -EnableAutoUpdate:$false -PatchMode Manual
-PatchMode is probably not necessary if you set -EnableAutoUpdate to $false. I read about this while googling, but I can't find the article right now :-)

It's not possible to do this for VMs that were created without that property set in the first place. The enableAutomaticUpdates property can only be set during the creation of the VM; this is documented in the Microsoft docs.
The solution is to create a completely new VM. Then check the VM template definition (JSON view of the VM on the overview page) and make sure that the enableAutomaticUpdates property is set the way you need it (false in this case).
Next, take a snapshot of the OS disk of the old VM and create a new managed disk from the snapshot. Then, in the new VM, go to Disks and do an OS disk swap. I've just done this today and it worked.
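Not from the original answer, but for reference, a rough Az PowerShell sketch of those snapshot-and-swap steps. The resource group, location, and VM/disk names ($rg, $location, "old-vm", "new-vm", "swapped-os-disk") are placeholders; adjust to your environment.
# Placeholders: adjust resource group, location, and names to your environment
$rg = "my-resource-group"
$location = "westeurope"

# Snapshot the old VM's OS disk
$oldVm = Get-AzVM -ResourceGroupName $rg -Name "old-vm"
$snapConfig = New-AzSnapshotConfig -SourceUri $oldVm.StorageProfile.OsDisk.ManagedDisk.Id -Location $location -CreateOption Copy
$snapshot = New-AzSnapshot -ResourceGroupName $rg -SnapshotName "old-vm-os-snap" -Snapshot $snapConfig

# Create a managed disk from the snapshot
$diskConfig = New-AzDiskConfig -SourceResourceId $snapshot.Id -Location $location -CreateOption Copy
$newDisk = New-AzDisk -ResourceGroupName $rg -DiskName "swapped-os-disk" -Disk $diskConfig

# Swap the OS disk on the new VM (deallocate the VM first)
Stop-AzVM -ResourceGroupName $rg -Name "new-vm" -Force
$newVm = Get-AzVM -ResourceGroupName $rg -Name "new-vm"
Set-AzVMOSDisk -VM $newVm -ManagedDiskId $newDisk.Id -Name $newDisk.Name
Update-AzVM -ResourceGroupName $rg -VM $newVm
Start-AzVM -ResourceGroupName $rg -Name "new-vm"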


Terraform/GCP: ssh-keys not being added to metadata

I'm trying to add ssh-keys to my Google Cloud project at the project level with Terraform:
resource "google_compute_project_metadata_item" "oslogin" {
project = "${google_project_services.myproject.project}"
key = "enable-oslogin"
value = "false"
}
resource "google_compute_project_metadata_item" "block-project-ssh-keys" {
project = "${google_project_services.myproject.project}"
key = "block-project-ssh-keys"
value = "false"
}
resource "google_compute_project_metadata_item" "ssh-keys" {
key = "ssh-keys"
value = "user:ssh-rsa myverylongpublickeythatireplacewithtexthereforobviousreasons user#computer.local"
depends_on = [
"google_project_services.myproject",
]
}
I tried all sorts of combinations of the two metadata flags oslogin and block-project-ssh-keys, which always get set without issues. But the ssh keys never appear in GCP's web GUI, let alone the authorized_keys file. I even tried adding the depends_on to make sure the project exists before adding the keys, but that didn't help either.
Yet, Terraform says:
google_compute_project_metadata_item.ssh-keys: Creation complete after 8s (ID: ssh-keys)
Adding the exact same key manually in the web GUI works fine. At this point I believe I have tried everything and read all the first-page Google results for 'terraform gcp add ssh key' and similar queries ... I'm at my wits' end.
The issue was that the ssh key was being added to a different project.
I started with Google's tutorial on GCP/Terraform. It first creates a generic project with the gcloud tool, then creates the accounts using that generic project. This is necessary because you need a user to run Terraform against their API. It then creates a new project for those users with Terraform each time you apply. The generic project created with gcloud is not touched after the initial creation.
If you omit the project parameter from the google_compute_project_metadata_item.ssh-keys resource, the keys are added to that generic project - at least in my case.
Solution: explicitly add the project parameter to the metadata resource to make sure the key is added to the right project, as shown below.
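For illustration, a minimal sketch of the fixed resource, assuming the same project naming as in the question:
resource "google_compute_project_metadata_item" "ssh-keys" {
  # explicit project, so the key lands in the project you expect
  project = "${google_project_services.myproject.project}"
  key     = "ssh-keys"
  value   = "user:ssh-rsa myverylongpublickeythatireplacewithtexthereforobviousreasons user@computer.local"
}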

How to set Neo4J config keys in gremlin-scala?

When running a Neo4j database server standalone (on Ubuntu 14.04), configuration options are set for the global installation in /etc/neo4j/neo4j.conf or possibly $NEO4J_HOME/conf/neo4j.conf.
However, when instantiating a Neo4j database from Java or Scala using Apache's Neo4jGraph class (org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph), there is no global installation, and the constructor does not (as far as I can tell) look for any configuration files.
In particular, when running the test suite for my application, I end up with many simultaneous instances of Neo4jGraph, which ends up throwing a java.net.BindException: Address already in use because all of these instances are trying to communicate over a small range of ports for online backup, which I don't actually need. These channels are set with config options dbms.backup.address (default value: 127.0.0.1:6362-6372) and dbms.backup.enabled (default value: true).
My problem would be solved by setting dbms.backup.enabled to false, or expanding the port range.
Things that have not worked:
Creating /etc/neo4j/neo4j.conf containing the line dbms.backup.enabled=false.
Creating the same file in my project's src/main/resources directory.
Creating the same file in src/main/resources/neo4j.
Manually setting the configuration property inside the Scala code:
val db = new Neo4jGraph(dataDirectory)
db.configuration.addProperty("dbms.backup.enabled",false)
or
db.configuration.addProperty("neo4j.conf.dbms.backup.enabled",false)
or
db.configuration.addProperty("gremlin.neo4j.conf.dbms.backup.enabled",false)
How should I go about setting this property?
Neo4jGraph configuration through TinkerPop is accomplished by a pass-through of configuration keys. In TinkerPop 3.x, that would mean that all Neo4j keys prefixed with gremlin.neo4j.conf that are provided via Configuration object to Neo4jGraph.open() or GraphFactory.open() will be passed down directly to the Neo4j instance. You can see examples of this here in the TinkerPop documentation on high availability configuration.
In TinkerPop 2.x, the same approach was taken however the key prefix was instead blueprints.neo4j.conf.* as discussed here.
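As a minimal sketch of that pass-through in Scala, assuming TinkerPop 3.x (the directory path is a placeholder):
import org.apache.commons.configuration.BaseConfiguration
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph

val conf = new BaseConfiguration()
conf.setProperty("gremlin.neo4j.directory", "/path/to/db")
// any key under gremlin.neo4j.conf.* is passed straight through to the Neo4j instance
conf.setProperty("gremlin.neo4j.conf.dbms.backup.enabled", "false")
val graph = Neo4jGraph.open(conf)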
Manipulating db.configuration after the database connection had already been opened was definitely futile.
stephen mallette's answer was on the right track, but this particular configuration doesn't appear to pass through in the way his linked example does. There is a naming mismatch between the configuration keys expected in neo4j.conf and those expected in org.neo4j.backup.OnlineBackupKernelExtension. Instead of dbms.backup.address and dbms.backup.enabled, that class looks for config keys online_backup_server and online_backup_enabled.
I was not able to get these keys passed down to the underlying Neo4jGraphAPI instance correctly. What I had to do instead was the following:
import org.neo4j.tinkerpop.api.impl.Neo4jFactoryImpl
import scala.collection.JavaConverters._
val factory = new Neo4jFactoryImpl()
val config = Map(
  "online_backup_enabled" -> "true",
  "online_backup_server" -> "0.0.0.0:6350-6359"
).asJava
val db = Neo4jGraph.open(factory.newGraphDatabase(dataDirectory, config))
With this initialization, the instance correctly listened for backups on port 6350; changing "true" to "false" disabled backup listening.
Using Neo4j 3.0.0, the following disables port listening for me (Java code):
import org.apache.commons.configuration.BaseConfiguration;
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph;

BaseConfiguration conf = new BaseConfiguration();
conf.setProperty(Neo4jGraph.CONFIG_DIRECTORY, "/path/to/db");
conf.setProperty(Neo4jGraph.CONFIG_CONF + "." + "dbms.backup.enabled", "false");
Neo4jGraph graph = Neo4jGraph.open(conf);

Setting user credentials on aws instance using jclouds

I am trying to create an AWS instance using jclouds 1.9.0 and then run a script on it (via ssh). I am following the example located here, but I get authentication failed errors when the client (a Java program) tries to connect to the instance. The AWS console shows that the instance is up and running.
The example tries to create a LoginCredentials object:
String user = System.getProperty("user.name");
String privateKey = Files.toString(new File(System.getProperty("user.home") + "/.ssh/id_rsa"), UTF_8);
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
which is later used by the ssh client:
responses = compute.runScriptOnNodesMatching(
    inGroup(groupName),              // predicate used to select nodes
    exec(command),                   // what you actually intend to run
    overrideLoginCredentials(login)  // use my local user & ssh key
        .runAsRoot(false)            // don't attempt to run as root (sudo)
        .wrapInInitScript(false));
Some login information is injected into the instance with the following commands:
Statement bootInstructions = AdminAccess.standard();
templateBuilder.options(runScript(bootInstructions));
Since I am on a Windows machine, the creation of LoginCredentials 'fails', so I altered its code to:
String user = "ec2-user";
String privateKey = "-----BEGIN RSA PRIVATE KEY-----.....-----END RSA PRIVATE KEY-----";
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
I also tried to define the credentials while building the template, as described in the "EC2: In Depth" guide, but with no luck.
An alternative is to build the instance and inject the keypair as follows, but this implies that I need to have the ssh key stored in my AWS console, which is currently not the case, and it also breaks the functionality of running a script (via ssh) since I cannot infer the NodeMetadata from a RunningInstance object.
RunInstancesOptions options = RunInstancesOptions.Builder.asType("t2.micro").withKeyName(keypair).withSecurityGroup(securityGroup).withUserData(script.getBytes());
Any suggestions??
Note: While I am currently testing this on aws, I want to keep the code as decoupled from the provider as possible.
Update 26/10/2015
Based on @Ignasi Barrera's answer, I changed my implementation by adding .init(new MyAdminAccessConfiguration()) while creating the bootInstructions:
Statement bootInstructions = AdminAccess.standard().init(new MyAdminAccessConfiguration());
templateBuilder.options(runScript(bootInstructions));
where MyAdminAccessConfiguration is my own implementation of the AdminAccessConfiguration interface, as @Ignasi Barrera described it.
I think the issue lies in the fact that the jclouds code runs on a Windows machine and jclouds makes some Unix assumptions by default.
There are two different things here: first, the AdminAccess.standard() is used to configure a user in the deployed node once it boots, and later the LoginCredentials object passed to the run script method is used to authenticate against the user that has been created with the previous statement.
The issue here is that AdminAccess.standard() reads the "current user" information and assumes a Unix system. That user information is provided by this Default class, and in your case I'm pretty sure it falls back to the catch block and returns an auto-generated SSH key pair. That means AdminAccess.standard() is creating a user in the node with an auto-generated (random) SSH key, but the LoginCredentials you are building don't match those keys, hence the authentication failure.
Since the AdminAccess entity is immutable, the better and cleaner approach to fix this is to create your own implementation of the AdminAccessConfiguration interface. You can just copy the entire Default class and change the Unix specific bits to accommodate the SSH setup in your Windows machine. Once you have the implementation class, you can inject it by creating a Guice module and passing it to the list of modules provided when creating the jclouds context. Something like:
// Create the custom module to inject your implementation
Module windowsAdminAccess = new AbstractModule() {
    @Override protected void configure() {
        bind(AdminAccessConfiguration.class).to(YourCustomWindowsImpl.class).in(Scopes.SINGLETON);
    }
};
// Provide the module in the module list when creating the context
ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
    .credentials("api-key", "api-secret")
    .modules(ImmutableSet.<Module> of(windowsAdminAccess, new SshjSshClientModule()))
    .buildView(ComputeServiceContext.class);

RavenDB studio "no databases and no file systems" in embedded mode

I could not find any information about this online after much searching; any help would be appreciated.
I create my EmbeddableDocumentStore and everything seems to be working fine; my application is using the database. However, when I access the management studio on port 5050, it states "No databases and no filesystems are available".
RavenDB.Client and RavenDB.Database nuget package versions are 3.0.3800.
var store = new EmbeddableDocumentStore
{
    DataDirectory = "Data",
    UseEmbeddedHttpServer = true
};
store.Configuration.Port = 5050;
store.Initialize();
While writing the question I had an idea and tried it. It resolved the issue, but I thought I would post this for reference in case someone has a similar problem.
I could not see this in the docs (http://ravendb.net/docs/article-page/3.0/csharp/server/installation/embedded), but in order to access the resource I had to give it a name:
var store = new EmbeddableDocumentStore
{
    DataDirectory = "Data",
    UseEmbeddedHttpServer = true,
    DefaultDatabase = "Default"
};
Now it shows up in the RavenDB studio.
Not really an issue, but by default you'll be connected to the system database. You can still get to this database in the Admin Studio:
go to "Manage My Server", then "To system database", and accept the warning message.
Obviously it shouldn't be used like this, but it's useful if you need to recover data accidentally saved there.
Setting the DefaultDatabase property as you've done is the correct solution.

this command is not available unless the connection is created with admin-commands enabled

When trying to run the following in Redis using BookSleeve:
using (var conn = new RedisConnection(server, port, -1, password))
{
    var result = conn.Server.FlushDb(0);
    result.Wait();
}
I get an error saying:
This command is not available unless the connection is created with
admin-commands enabled
I am not sure how to execute commands as admin. Do I need to create an account in the db with admin access and log in with that?
Updated answer for StackExchange.Redis:
var conn = ConnectionMultiplexer.Connect("localhost,allowAdmin=true");
Note also that the object created here should be created once per application and shared as a global singleton, per Marc:
Because the ConnectionMultiplexer does a lot, it is designed to be
shared and reused between callers. You should not create a
ConnectionMultiplexer per operation. It is fully thread-safe and ready
for this usage.
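For illustration, a minimal sketch of that shared-singleton pattern (the class name and connection string are placeholders, not from the answer):
using System;
using StackExchange.Redis;

public static class RedisConnectionFactory
{
    // Created once and shared by all callers; Lazy<T> makes the initialization thread-safe
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect("localhost,allowAdmin=true"));

    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}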
Basically, the dangerous commands that you don't need in routine operations, but which can cause lots of problems if used inappropriately (i.e. the equivalent of drop database in tsql, since your example is FlushDb) are protected by a "yes, I meant to do that..." flag:
using (var conn = new RedisConnection(server, port, -1, password,
                                      allowAdmin: true)) // <==== here
I will improve the error message to make this very clear and explicit.
You can also set this in C# when you're creating your multiplexer - set AllowAdmin = true:
private ConnectionMultiplexer GetConnectionMultiplexer()
{
    var options = ConfigurationOptions.Parse("localhost:6379");
    options.ConnectRetry = 5;
    options.AllowAdmin = true;
    return ConnectionMultiplexer.Connect(options);
}
For those who, like me, faced the error:
StackExchange.Redis.RedisCommandException: This operation is not
available unless admin mode is enabled: ROLE
after upgrading StackExchange.Redis to version 2.2.4 with a Sentinel connection: it's a known bug, and the workaround was either to downgrade the client or to add allowAdmin=true to the connection string and wait for the fix.
Starting from the 2.2.50 public release, the issue is fixed.