I can't log in to a cloned image on Rackspace - authentication

I have an instance in my account that I can log in to over SSH on port 27891. When I clone it (make an image, and from that image a server), I can't access it with the root password Rackspace gave me.
I've tried port 22 and port 27891, and neither works.
I'd appreciate any help!
Thanks.

Cloned servers don't have the same password. You'll need to grab the admin password either from the UI or the API you're using. If you need to, you can always change the password as well:
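For example, a minimal sketch with pyrax, assuming cs is an authenticated cloud servers client as in the cloning example further down, and using a placeholder server ID and password:
# Look the server up by its ID and set a new root password
server = cs.servers.get("YOUR-SERVER-ID")
server.change_password("new-root-password")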
On the other hand, I highly recommend using SSH keypairs with Rackspace boxes. You'll need to use the API (or one of the SDKs), but it makes it much easier to manage the boxes. Additionally, the authorized_keys file will be on each of the cloned images.
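For instance, a public key can be uploaded once with pyrax and then referenced by name via key_name whenever a server is created; a small sketch, again assuming an authenticated client cs, with an illustrative key name and path:
import os
# Upload an existing public key under a chosen name
with open(os.path.expanduser("~/.ssh/id_rsa.pub")) as f:
    cs.keypairs.create("my-keypair", f.read())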
Example cloning code, using pyrax (the Python library for Rackspace):
# Authenticate with Rackspace -- also works against OpenStack/Keystone
pyrax.set_setting("identity_type", "rackspace")
pyrax.set_credential_file(os.path.expanduser("~/.rax_creds"))
cs = pyrax.connect_to_cloudservers(region="DFW")
# Could use the API to get image IDs
# This one is for Ubuntu 13.04 (Raring Ringtail) (PVHVM beta)
image = u'62df001e-87ee-407c-b042-6f4e13f5d7e1'
flavor = u'performance1-1'
# Create a server (solely to make an image out of it, for this example)
base_server = cs.servers.create("makeanimageoutofme",
                                image,
                                flavor,
                                key_name="rgbkrk")
# It takes a little while to create the server, so we poll it until it completes
base_server = pyrax.utils.wait_for_build(base_server, verbose=True)
# Create image
im = base_server.create_image("base_image")
image = cs.images.get(im)
# Wait for image creation to finish
image = pyrax.utils.wait_until(image, "status", "ACTIVE", attempts=0)
# Note that a clone can be a bigger flavor (VM size) if needed as well
# Here we use the flavor of the base server
clone_server = cs.servers.create(name="iamtheclone",
                                 image=image.id,
                                 flavor=base_server.flavor['id'])
clone_server = pyrax.utils.wait_for_build(clone_server, verbose=True)
For each of the created servers, the generated root credentials are in base_server.adminPass and clone_server.adminPass. To access the boxes, use base_server.accessIPv4 and clone_server.accessIPv4.
However, I highly recommend using SSH keypairs.

Related

Can someone please tell me how to define a check_disk service with check_nrpe in icinga 2?

I'm trying to check the disk status of an Ubuntu 16.04 client instance from an Icinga 2 master server. I tried to use the NRPE plugin to check the disk status, but I ran into trouble when defining the service in the service.conf file. Can someone please tell me which files need to be changed when using NRPE? I'm new to Icinga and NRPE.
I was able to find the solution to my problem. I'm putting it here because it may help someone else.
I use the check_load example here to explain; the same approach applies to check_disk.
1. First of all, you need to create a host .conf file (name: 192.168.30.40-host.conf) for the client server you are going to monitor with icinga2. It should be placed in the /etc/icinga2/conf.d/ folder.
/etc/icinga2/conf.d/192.168.30.40-host.conf
object Host "host1" {
  import "generic-host"
  display_name = "host1"
  address = "192.168.30.40"
}
2. You should create a service file for your client.
/etc/icinga2/conf.d/192.168.30.40-service.conf
object Service "LOAD AVERAGE" {
  import "generic-service"
  host_name = "host1"
  check_command = "nrpe"
  vars.nrpe_command = "check_load"
}
3. This is an important part of the problem. You should add this line to the nrpe.cfg file on the client host where the NRPE agent runs:
/etc/nagios/nrpe.cfg
command[check_load]=/usr/lib64/nagios/plugins/check_load -w 15,10,5 -c 20,15,10
4. Make sure to restart the icinga2 and NRPE services after making any change.
You could also use an icinga2 agent instead of nrpe. The agent will be able to receive its configuration from a master or satellite, and perform local checks on the server.

How do I make an SSL connection from a Kong serverless function using a client certificate

I'm trying to create a serverless function for Kong for authentication purposes. I'm required to use a client certificate to authenticate with the remote service that we have to use. I can't seem to get this working and there appears to be no clear documentation on how to do this. I've tried pintsized/lua-resty-http, ngx.socket.tcp(), and luacurl (failed to build) without success. I'm using the newest version of Kong in an Alpine Linux container in case that matters.
What is the best way to do this? Right now I'm considering simply calling curl from within Lua as I know that works, but I was hoping for a better solution that I can do with just Lua/OpenResty.
Thanks.
UPDATE: I just wanted to add, just in case it helps, that I'm already building a new image based on the official Kong one as I had to modify the nginx configuration templates, so installing new software into the container is not an issue.
All,
Apologies for the ugly code, but it looks like I found an answer that works:
require("socket")
local currUrl = "https://some.url/"
local https = require("ssl.https")
local ltn12 = require("ltn12")
local chunks = {}
local body, code, headers, status = https.request{
  mode = "client",
  url = currUrl,
  protocol = "tlsv1_2",
  certificate = "/certs/bundle.crt",
  key = "/certs/bundle.key",
  verify = "none",
  sink = ltn12.sink.table(chunks),
}
If someone has a better answer, I'd appreciate it, but it's hard to complain about this one. The main issue is that while this works for a GET request, I'll want to do POSTs to a service in the future, and I have no idea how to do that with similar code. I'd like one library/API that can do any type of REST request.
This blog got me on the right track: http://notebook.kulchenko.com/programming/https-ssl-calls-with-lua-and-luasec

Enable Cloud Vision API to access a file on Cloud Storage

I have already seen that there are some similar questions, but none of them actually provides a full answer.
Since I cannot comment in that thread, I am opening a new one.
How do I address Brandon's comment below?
"... In order to use the Cloud Vision API with a non-public GCS object, you'll need to send OAuth authentication information along with your request for a user or service account which has permission to read the GCS object."?
I have the JSON file the system gave me, as described here, when I created the service account.
I am trying to call the API from a Python script.
It is not clear how to use it.
I'd recommend using the Vision API Client Library for Python to perform the call. You can install it on your machine (ideally in a virtualenv) by running the following command:
pip install --upgrade google-cloud-vision
Next, you'll need to set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key. For example, on a Linux machine you'd do it like this:
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json"
Finally, you just have to call the Vision API client method you need (for example, the label_detection method here), like so:
from google.cloud import vision
from google.cloud.vision import types

def detect_labels():
    """Detects labels in the file located in Google Cloud Storage."""
    client = vision.ImageAnnotatorClient()
    image = types.Image()
    # Point the request at an object in your GCS bucket
    image.source.image_uri = "gs://bucket_name/path_to_image_object"
    response = client.label_detection(image=image)
    labels = response.label_annotations
    print('Labels:')
    for label in labels:
        print(label.description)
By initializing the client with no parameters, the library will automatically look for the GOOGLE_APPLICATION_CREDENTIALS environment variable you've previously set and will run on behalf of that service account. If you granted it permission to access the file, it'll run successfully.
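If you'd rather not rely on the environment variable, I believe the client can also be pointed at the key file explicitly; a small sketch, with the path being a placeholder:
from google.cloud import vision

# Build the client directly from the service account key file
client = vision.ImageAnnotatorClient.from_service_account_json(
    "/home/user/Downloads/service-account-file.json")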

mininet connect to remote ODL controller with python code

I'm new to Mininet. I want to see the network topology using the OpenDaylight (Carbon) controller. I have tried this command:
sudo mn --topo linear,3 --mac \
--controller=remote,ip=10.109.253.152,port=6633 \
--switch ovs,protocols=OpenFlow13,stp=1
OpenDaylight can successfully show the whole topology. Then I wanted to get the same result using Python code alone. However, it doesn't work.
#!/usr/bin/python
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.log import info, setLogLevel
from mininet.cli import CLI

def RemoteCon():
    net = Mininet(controller=RemoteController, switch=OVSSwitch)
    c1 = net.addController('c1', ip='10.109.253.152', port=6633)
    h1 = net.addHost('h1')
    h2 = net.addHost('h2')
    s1 = net.addSwitch('s1')
    net.addLink(s1, h1)
    net.addLink(s1, h2)
    net.build()
    net.start()
    CLI(net)
    net.stop()

if __name__ == '__main__':
    setLogLevel('info')
    RemoteCon()
Oh, by the way, do the switches have default forwarding functionality? Sometimes I have hosts and a switch connected to each other and the hosts can ping each other, but while running the above code, h1 cannot ping h2 and vice versa.
Thanks in advance.
I'm assuming you are using the l2switch feature in OpenDaylight. If you search this forum, you'll find others complaining of inconsistent connectivity when using l2switch. You are probably hitting bugs, but after a restart of OpenDaylight it might be OK. By default, l2switch should learn the links of the topology and create the flows to allow every host to ping every other host.
As for your Python script to run Mininet, I don't see anything obvious. Can you look in the OpenDaylight karaf.log for any clues? Or check the OVS logs for other clues? If you are simply not seeing anything in the topology viewer, then my guess is that OVS is not connecting to OpenDaylight at all.
One thing to double-check: I don't know how the Python script decides which OpenFlow version to use, but maybe it's using 1.0, and that's the big difference from your command line, which sets it to 1.3?
I see that you missed starting your switch to communicate with the controller. Try
s1.start([c1])
This defines which controller the switch is connected to. Hope this helps.
You should pass the protocols parameter to the addSwitch function, as on the command line:
s1 = net.addSwitch('s1', switch=OVSSwitch, protocols='OpenFlow10')
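Putting the two suggestions above together, a minimal sketch of a corrected script might look like this (I'm assuming you want OpenFlow 1.3 to match the mn command line; adjust as needed):
#!/usr/bin/python
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.log import setLogLevel
from mininet.cli import CLI

def RemoteCon():
    net = Mininet(controller=RemoteController, switch=OVSSwitch)
    c1 = net.addController('c1', ip='10.109.253.152', port=6633)
    h1 = net.addHost('h1')
    h2 = net.addHost('h2')
    # Ask the switch to speak OpenFlow 1.3, matching the mn command line
    s1 = net.addSwitch('s1', protocols='OpenFlow13')
    net.addLink(s1, h1)
    net.addLink(s1, h2)
    net.build()
    # Start the controller and explicitly attach the switch to it
    c1.start()
    s1.start([c1])
    CLI(net)
    net.stop()

if __name__ == '__main__':
    setLogLevel('info')
    RemoteCon()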

dcm4che dcmqr cmove is writing to the default storage directory instead of the directory path specified in the -cstoredest option

I am trying to run this command:
dcmqr -L SEHATYPACS:104 -cmove SEHATYPACS -I DICOM_QR_SCP#10.221.21.111:7840 -qStudyInstanceUID=1.2.124.113532.10.210.5.12.20090321.114039.3256166 -cstore 1.2.840.10008.5.1.4.1.1.3.1 -cstoredest C:\temp2\X-RAY
The command executes successfully and returns 3 matches. However, the moved images are not written to the storage directory specified in the -cstoredest option. Instead, the images are written to the default storage directory defined in the JBoss JMX Management Console under "group=ONLINE_STORAGE,service=FileSystemMgt".
As you can see, the command uses the -cstore option with the SOP Class UID of the image.
I need those images to be written to the directory given in the -cstoredest option instead of the default storage directory.
Could you please help?
It sounds like you have a DCM4CHEE archive running on this machine with an AE of SEHATYPACS?
I expect the AE you are requesting from (DICOM_QR_SCP) already has the AE title SEHATYPACS defined and pointing to port 11112 (or whatever you have DCM4CHEE configured to use).
I think you have a couple of options: one, stop DCM4CHEE and use its port number for the listener in your command (dcmqr -L SEHATYPACS:11112 ...); or two, use a different AE title for your -L flag (this will likely require you to define that AE on your DICOM_QR_SCP side).