Running scripts in deployment manager - scripting

How do I run a script alongside a cluster instance that I am creating, to configure the SQL proxy, using Google Deployment Manager?

Startup scripts are not allowed to be specified for GKE nodes. From https://cloud.google.com/container-engine/reference/rest/v1/NodeConfig:
Additionally, to avoid ambiguity, keys must not conflict with any other metadata keys for the project or be one of the four reserved keys: "instance-template", "kube-env", "startup-script", and "user-data"
Take a look at https://github.com/kubernetes/contrib/tree/master/startup-script for a way to replace the functionality of startup scripts with a daemonset.

Related

Release pipeline conflict with integration runtime

This question relates to how to propagate a data factory through CI (in VSTS) when there is a self-hosted Integration Runtime defined in the Data Factory.
I have 3 environments set up - Dev / UAT / Prod, each with its own data factory.
The Dev hosts the master collaboration branch. I am using VSTS to retrieve the artifacts from the adf_publish branch and deploying the template to UAT (prod will be done later). I followed much of what is in this guide here.
When deploying to a blank UAT with a self-hosted integration runtime (IR), the IR that is deployed in UAT is a copy of the shared IR from Dev (not a linked type), and this causes an error because the credentials used by the IR will not be correct. I expect this, since we are really just deploying an exact copy of the resource group template with only the factory name overridden; however, the IR will not work without being re-credentialled against the self-hosted IR VMs.
If I pre-register a linked IR in the UAT environment (linked to the Dev IR), then the deployment fails with a conflict because an IR in the resource group template has the same name as the one I just created in UAT. If it has a different name there is no conflict, but the linked services will point to the template IR and not the one I created for UAT.
The docs have a note saying the IR should be the same across all the environments, but I do not think this can be true - one of them (presumably the source/dev one) must be a shared type and the others linked and authorized.
One option I could see (untested) is to have each environment's IR reference be a separate connection to an actual IR, but then there needs to be some way of overriding the linked services to point to the current environment's IR reference (by template parameter override?). In this scenario, we would need to block the template's IR from being deployed, as it won't be needed and won't work.
Has anyone had success in getting CI working in this situation? My sense is that the doc was written with a globally shared IR in mind. Either that, or I need to better understand the aim of the Auto Integration setting in the linked services definition.
Many thanks.
Mark.
Update
I think there are a couple of bugs in the service, so I am not expecting an answer. I'll post updates here if I see a resolution of the bug report I have posted here for the dev group.
In a nutshell, this only affects you if:
you have a self-hosted integration runtime (IR),
you are trying to deploy a template to a new data factory from an existing data factory (as you would in Dev -> UAT -> Prod), and
you have a Data Lake (ADL) linked service defined that uses the self-hosted IR.
If you have a self hosted IR in the template, the newly deployed copy will not be registered with any server (either linked or unique to the new ADF) as the template only records an IR, it does not instantiate one.
While this can be fixed with post-deployment configuration or scripting, what it cannot fix is the dependency in ADL. This is because the ADL linked service wants to encrypt the service principal with the IR, but the IR does not exist at the time of template deployment (i.e. it is not configured on a server and is not active).
It is no better if you select managed service identity as the auth method on the ADL linked service instead of a service principal: the template then fails to deploy because there are no credentials to encrypt, and the resource appears to expect something to encrypt.
The work around right now is to use Azure hosted IR for datalake connections. Unfortunately for us this causes a security problem because shared IRs cannot be whitelisted in our ADL Gen 1.
I'll keep you posted.

How to deploy an atg project in weblogic?

I created a simple project using ATG 10.2. I want to know how to deploy it in WebLogic. Please provide a detailed procedure, with screenshots if possible.
To provide a 'detailed' procedure is beyond the scope of what Stack Overflow is trying to provide. That said, if you have an understanding of the WebLogic Administration Console you should be able to follow these steps to set up your initial deployment:
Create a Server
1.1 Specify a server name (e.g. commerce) and the port number this server will run on (e.g. 8180). Select it as a 'Stand-alone server'.
1.2 Once created, go to Configuration > Server Start for the newly created server and modify the 'Arguments' block to include the following settings (this assumes you are running Windows; for Unix, adjust the paths accordingly):
-Datg.dynamo.data-dir=c:\ATG-Data -Datg.dynamo.server.name=commerce -d64 -XX:ParallelGCThreads=8 -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Xms1152m -Xmx2048m -XX:NewSize=128m -XX:MaxNewSize=256m -XX:PermSize=128m -XX:MaxPermSize=256m
1.3 Save your Server
Create DataSources
2.1 In the Console click on 'Services > Data Sources'
2.2 Create 'New' data sources for each of your connections. At a minimum you will need connections for ATGSwitchingDS_A, ATGSwitchingDS_B (assuming you are using switching data sources) and ATGProductionDS. These names should match the JNDI names in your property files. Remember to specify the 'commerce' server as the target for each of the data sources.
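If you prefer to script this step instead of clicking through the console, an online WLST sketch along the following lines should work; the admin credentials and URL, data source name, JNDI name, JDBC URL, database credentials and the 'commerce' target are illustrative placeholders, so treat this as a starting point rather than a verified recipe:
from java.lang import String
from javax.management import ObjectName
import jarray
connect('weblogic', 'password', 't3://localhost:7001')  # placeholder credentials and admin URL
edit()
startEdit()
cd('/')
cmo.createJDBCSystemResource('ATGProductionDS')
cd('/JDBCSystemResources/ATGProductionDS/JDBCResource/ATGProductionDS')
cmo.setName('ATGProductionDS')
cd('JDBCDataSourceParams/ATGProductionDS')
set('JNDINames', jarray.array([String('ATGProductionDS')], String))  # must match the JNDI name in your property files
cd('/JDBCSystemResources/ATGProductionDS/JDBCResource/ATGProductionDS/JDBCDriverParams/ATGProductionDS')
cmo.setUrl('jdbc:oracle:thin:@dbhost:1521:ORCL')  # example Oracle URL
cmo.setDriverName('oracle.jdbc.OracleDriver')
cmo.setPassword('dbpassword')
cd('Properties/ATGProductionDS')
cmo.createProperty('user')
cd('Properties/user')
cmo.setValue('atg_schema_user')
cd('/JDBCSystemResources/ATGProductionDS')
set('Targets', jarray.array([ObjectName('com.bea:Name=commerce,Type=Server')], ObjectName))  # target the 'commerce' server
save()
activate()
Repeat the same pattern for ATGSwitchingDS_A and ATGSwitchingDS_B if you are using switching data sources.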
Create Deployment
3.1 Assuming you have already built your EAR (e.g. ATGProduction.ear) and it is available in c:\deployments, you need to create a deployment in WebLogic. Create the deployment in the console and specify the target as 'commerce'. Once done, you also need to 'start serving requests' on the deployment.
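If you want to script the deployment instead of using the console, a small WLST sketch such as the one below should achieve the same result; the credentials, admin URL and file path are assumptions to adapt to your environment:
connect('weblogic', 'password', 't3://localhost:7001')  # placeholder credentials and admin URL
deploy('ATGProduction', 'c:/deployments/ATGProduction.ear', targets='commerce')
startApplication('ATGProduction')  # the scripted equivalent of 'start serving requests'
disconnect()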
Start Server
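To start the new managed server you can run startManagedWebLogic.cmd commerce http://localhost:7001 from the domain's bin directory, or, if you have Node Manager running, use a WLST sketch along these lines (the Node Manager port, connection type, domain name and domain directory below are assumptions):
nmConnect('weblogic', 'password', 'localhost', '5556', 'atg_domain', 'c:/oracle/user_projects/domains/atg_domain', 'plain')
nmStart('commerce')
nmDisconnect()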
You should now be able to see your server running on port 8180 with the log files being written to c:\ATG-Data\servers\commerce\logs.
If after this things aren't running, post specific questions about your issues and someone here might be able to help you.

Unable to delete osb_server1 in OSB 10.3.6.0

There are scripts that build the admin server and then create clusters, managed servers, machines, etc. When the domain is built, an additional phantom server, osb_server1 on port 8011, gets created that isn't attached to any cluster or machine.
This server is created because wlsb.jar is referenced by one of the scripts.
Once the admin server is up and running and the other managed servers exist, I tried to remove osb_server1, and this error comes up:
weblogic.management.configuration.AppDeploymentMBeanImpl.isCacheInAppDirectorySet()
Errors must be corrected before proceeding
There are around 120 default deployments in OSB that are targeted to osb_server1. I tried to retarget them to another server, but that also throws an error.
Any ideas?
That's due to the weird behaviour/bug of the standard OSB domain template. There is a discussion here: http://theheat.dk/blog/?p=1255.
I didn't follow the steps given by Oracle (as in the URL). What I did was:
I keep the default osb_server1 and make it part of the cluster during domain creation (i.e. it is the first server). Once the domain is created, I reset osb_server1 (name and listen port) to the desired values. That way the singleton services are still deployed to the first server and the others to the cluster. Using offline WLST:
# offline WLST: rename the default osb_server1 and change its listen port
readDomain(domain_name)
cd('/Servers/osb_server1')
set('ListenPort', osb1_listen_port)
set('Name', osb1_name)
# the ServerDiagnosticConfig child still carries the old name, so rename it too
cd('/Servers/' + osb1_name + '/ServerDiagnosticConfig/osb_server1')
set('Name', osb1_name)
updateDomain()
closeDomain()

How can I create a new WebLogic managed server and add it to the cluster "dynamically"?

I want to be able to "spin up" additional WebLogic 11g managed servers in a cluster to add additional horizontal instances.
The use case is that I start with a WebLogic cluster that has X instances and I want to add an additional instance dynamically i.e. using a script and using a live system (without any downtime).
What can I use (e.g. WLST?) to create the additional managed server and configure it (e.g. add it to the domain)? Has anyone done this?
Here is an example with WLST. Once the new server MBean has been created, you can call setCluster and setMachine on it.
Create the server:
connect('username','password')
edit()
startEdit()
cmo.createServer('managed1')
cd('/Servers/managed1')
Add the server to the cluster and machine (after the cd to the server created above). MyCluster and MyMachine are example names, and both must already exist (cluster creation is shown below):
cmo.setListenPort(7003)  # example port
cmo.setCluster(getMBean('/Clusters/MyCluster'))
cmo.setMachine(getMBean('/Machines/MyMachine'))
Reaching/creating a cluster in WLST works in a similar way:
cd('/')
cmo.createCluster('MyCluster')
cd('/Clusters/MyCluster')
cmo.setClusterAddress('myhost:port,myhost2:port')
cmo.setClusterMessagingMode('unicast')
# activate the pending edit session once everything is configured
save()
activate()

weblogic clustering: setting parameters through startup script for each server

I need to set up separate parameters for each server during startup, so the application inside the server can use these parameters. The application will be deployed as an EAR file.
What is the easiest way to do this? WebLogic 10.3.2.
Typically what I have done is the reverse of JoseK - have a script called 'startms#.sh', and in that script you can set/export JAVA_OPTIONS and then call startManagedWebLogic.sh. They should be propagated down to the startManagedWebLogic.sh script unless you have changed that script.
Have the startManagedWebLogic.sh command run a separate .sh script internally.
That script should contain your specific params (one example below):
JAVA_OPTIONS="$JAVA_OPTIONS -Dweblogic.configuration.schemaValidationEnabled=false"
export JAVA_OPTIONS
And set these on each server as required
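If the managed servers are started through Node Manager rather than shell scripts, a similar effect can be achieved by setting per-server arguments on each server's ServerStart MBean. This WLST sketch is only an illustration; the server name, admin URL and -D flags are placeholders:
connect('weblogic', 'password', 't3://adminhost:7001')  # placeholder credentials and admin URL
edit()
startEdit()
cd('/Servers/ms1/ServerStart/ms1')  # 'ms1' stands in for your managed server name
cmo.setArguments('-Dapp.instance.id=ms1 -Dweblogic.configuration.schemaValidationEnabled=false')
save()
activate()
disconnect()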