How do I copy a file from host to guest while spinning up a VM? Any pointers?

Since networking is not set up on my cluster, I figured I would just copy the UnixBench gzip onto the VM to benchmark it.
What I fail to understand is how to copy the gzip to the VM while creating it. I am using this script to create VMs:
server = nova.servers.create(name=vmName, image=image.id, flavor=flavor.id, nics=nics, availability_zone=availability_zone, userdata=user_data, key_name=key_pair.name)

https://kimizhang.wordpress.com/2014/03/18/how-to-inject-filemetassh-keyroot-passworduserdataconfig-drive-to-a-vm-during-nova-boot/
File injection might work:
nova boot --flavor 1 --image cirros --nic net-id=d58bbcac-1908-4cda-a9da-a13cfbbf4e77 --file /fileinject=/root/keystonerc vm-file-inject
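The same nova.servers.create call shown above also accepts a files argument, a dict mapping a guest path to the file contents to inject at boot, so this can be done from your Python script directly. A minimal sketch, assuming the benchmark tarball sits at /root/unixbench.tar.gz on the host (a hypothetical path); note that injected files are subject to quota limits on size and count, so a large tarball may be rejected:
# Read the tarball from the host (hypothetical path).
with open('/root/unixbench.tar.gz', 'rb') as f:
    contents = f.read()

# files={} maps path-on-guest -> contents; nova injects them at boot.
server = nova.servers.create(name=vmName, image=image.id, flavor=flavor.id,
                             nics=nics, availability_zone=availability_zone,
                             userdata=user_data, key_name=key_pair.name,
                             files={'/root/unixbench.tar.gz': contents})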

Related

Incorrect log feed in Splunk

I have deployed a Splunk stand-alone server (which also acts as a deployment server) with Docker and installed a forwarder on my PC; the forwarder management page shows that the forwarder has connected to the Splunk server. I then tried to modify inputs.conf on the Splunk server as below:
[monitor://D:\git_web_test1\logs]
disabled=false
index=applogs
sourcetype=applogs
whitelist=*
I ran splunk reload deploy-server and could see the logs being pushed to the Splunk server;
however, I found they were pushed to the wrong index (main) with an unexpected sourcetype:
22/07/22 13:42:40.091
[2022-07-22T21:42:40.091] [INFO] default - server start at 8080.
host = DESKTOP-****, source = D:\git_web_test1\logs\app, sourcetype = app-too_small
I have never created this sourcetype before. Do you know why this happened?
The "-too_small" suffix is added to a sourcetype name when the sourcetype is undefined and the source does not contain enough data for Splunk to guess about the sourcetype's settings. A sourcetype is undefined if there is no props.conf entry for it on the indexer(s).
The fix is to create a sourcetype stanza in $SPLUNK_HOME/etc/system/local/props.conf on the Splunk server. It should look something like this:
[applogs]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
The most likely reason why the logs are in the wrong index is the specified index doesn't exist. It's not enough to put index=applogs in inputs.conf. The same index name must be present in indexes.conf on the indexer(s). On a standalone server the index can be created in the UI at Settings->Indexes.
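For reference, a minimal indexes.conf stanza for the missing index might look like the sketch below; the paths shown are the conventional defaults, so adjust them to your deployment:
[applogs]
homePath = $SPLUNK_DB/applogs/db
coldPath = $SPLUNK_DB/applogs/colddb
thawedPath = $SPLUNK_DB/applogs/thaweddb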

How do I make an if statement that changes which leaderstat a function updates when it is activated by 2 different tools

Basically, I am trying to make two tools activate the same function, except one tool makes the function update one leaderstat while the other tool makes the function update a different leaderstat:
local remote = game.ReplicatedStorage.Give
remote.OnServerEvent:Connect(function(Player)
local plr = Player
if Activated by Starterpack.Child.Cloud then
plr.leaderstats.JumpBoost.Value = plr.leaderstats.JumpBoost.Value +10
or if Activated by Starterpack.Child.Speed then
plr.leaderstats.Speed.Value = plr.Leaderstats.Speed.Value +10
end
end)
I expected this to allow one tool to activate the same function as the other tool but change a different leaderstat.
RemoteEvent:FireServer lets you pass any number of arguments when you invoke it. Have each tool supply a different identifier, then key off that identifier in RemoteEvent.OnServerEvent.
LocalScript inside Tool 1 - Cloud
local remoteGive = game.ReplicatedStorage.Give
local tool = script.Parent
tool.Equipped:Connect(function()
remoteGive:FireServer("Cloud")
end)
LocalScript inside Tool 2 - Speed
local remote = game.ReplicatedStorage.Give
local tool = script.Parent
tool.Equipped:Connect(function()
remote:FireServer("Speed")
end)
Server Script
local remote = game.ReplicatedStorage.Give
remote.OnServerEvent:Connect(function(Player, toolId)
if toolId == "Cloud" then
Player.leaderstats.JumpBoost.Value = Player.leaderstats.JumpBoost.Value + 10
elseif toolId == "Speed" then
Player.leaderstats.Speed.Value = Player.leaderstats.Speed.Value + 10
end
end)
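Note that this assumes each player already has a leaderstats folder containing JumpBoost and Speed values (those names are taken from your code). If not, a minimal sketch of creating them in a separate server Script:
-- Create the leaderstats folder when a player joins.
game.Players.PlayerAdded:Connect(function(player)
    local leaderstats = Instance.new("Folder")
    leaderstats.Name = "leaderstats" -- must be exactly this name, lowercase
    leaderstats.Parent = player

    local jumpBoost = Instance.new("IntValue")
    jumpBoost.Name = "JumpBoost"
    jumpBoost.Parent = leaderstats

    local speed = Instance.new("IntValue")
    speed.Name = "Speed"
    speed.Parent = leaderstats
end)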

How to run multiple Akka.NET Lighthouse seeds

On my way to using Akka.NET for a scalable application, I am trying to set up a cluster of Lighthouse seed nodes. I am testing 3 Lighthouse nodes as seed nodes, each running on the same machine on a different port. Below is my HOCON config sample:
lighthouse.actorsystem: "my-system"
# See petabridge.cmd configuration options here: https://cmd.petabridge.com/articles/install/host-configuration.html
petabridge.cmd.host = "0.0.0.0"
petabridge.cmd.port = 9111/9112/9113 #one in each node
akka.actor.provider = cluster
akka.remote.log-remote-lifecycle-events = DEBUG
akka.remote.dot-netty.tcp.transport-class = "Akka.Remote.Transport.DotNetty.TcpTransport, Akka.Remote"
akka.remote.dot-netty.tcp.applied-adapters = []
akka.remote.dot-netty.tcp.transport-protocol = tcp
akka.remote.dot-netty.tcp.public-hostname = "localhost"
akka.remote.dot-netty.tcp.hostname = "localhost"
akka.remote.dot-netty.tcp.port = 4001/4002/4003
akk.cluster.seed-nodes = ["akka.tcp://my-system@localhost:4001","akka.tcp://my-system@localhost:4002","akka.tcp://my-system@localhost:4003"]
akk.cluster.roles = [lighthouse]
If I start up these nodes from 3 command prompts, each is printing the following messages:
[INFO][22-01-2019 11:45:17][Thread 0020][Cluster] Cluster Node [akka.tcp://my-system@localhost:4001/4002/4003] - Node [akka.tcp://my-system@localhost:4001/4002/4003] is JOINING itself (with roles []) and forming a new cluster
[INFO][22-01-2019 11:45:17][Thread 0020][Cluster] Cluster Node [akka.tcp://my-system@localhost:4001/4002/4003] - Leader is moving node [akka.tcp://my-system@localhost:4001/4002/4003] to [Up]
My concern is that, per the logs, these three instances are not forming a single cluster; they seem to be forming three separate clusters, as the nodes are not getting any messages about the other Lighthouse nodes.
Can somebody please clarify whether this is the expected behavior? No example seems to be available online.

S3 path error with Flume HDFS Sink

I have a Flume consolidator which writes every entry to an S3 bucket on AWS.
The problem is with the directory path.
The events are supposed to be written to /flume/events/%y-%m-%d/%H%M, but they end up in //flume/events/%y-%m-%d/%H%M.
It seems that Flume is prepending one more "/" at the beginning.
Any ideas for this issue? Is that a problem with my path configuration?
master.sources = source1
master.sinks = sink1
master.channels = channel1
master.sources.source1.type = netcat
# master.sources.source1.type = avro
master.sources.source1.bind = 0.0.0.0
master.sources.source1.port = 4555
master.sources.source1.interceptors = inter1
master.sources.source1.interceptors.inter1.type = timestamp
master.sinks.sink1.type = hdfs
master.sinks.sink1.hdfs.path = s3://KEY:SECRET@BUCKET/flume/events/%y-%m-%d/%H%M
master.sinks.sink1.hdfs.filePrefix = event
master.sinks.sink1.hdfs.round = true
master.sinks.sink1.hdfs.roundValue = 5
master.sinks.sink1.hdfs.roundUnit = minute
master.channels.channel1.type = memory
master.channels.channel1.capacity = 1000
master.channels.channel1.transactionCapacity = 100
master.sources.source1.channels = channel1
master.sinks.sink1.channel = channel1
The Flume NG HDFS sink doesn't implement anything special for S3 support. Hadoop has some built-in support for S3, but I don't know of anyone actively working on it. From what I have heard, it is somewhat out of date and may have some durability issues under failure.
That said, I know of people using it because it's "good enough".
Are you saying that "//xyz" (with multiple adjacent slashes) is a valid path name on S3? As you probably know, most Unixes collapse adjacent slashes.
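For reference, the Hadoop built-in S3 support mentioned above was usually addressed through the s3n:// (native) scheme rather than s3:// (the older block-based store). A hypothetical sink path using it, assuming your Hadoop build ships that filesystem:
master.sinks.sink1.hdfs.path = s3n://KEY:SECRET@BUCKET/flume/events/%y-%m-%d/%H%M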

How to monitor GlassFish thread pool via asadmin interface

I'm trying to use the asadmin interface to monitor a thread-pool on GlassFish 3.1.1. I'm executing the following command:
asadmin get -m server.network.my-listener.thread-pool.*
and I'm getting data back, but most of it has lastsampletime = -1 (so the related data is zero and therefore worthless).
Note: I've also tried the REST interface, which I believe asadmin delegates to, and the JMX interface. Same problem: much of the data has lastsampletime = -1.
I've already turned monitoring to HIGH for all modules. What am I missing?
It seems that redeploying my application was necessary for the monitoring to actually get values. Perhaps I interpreted the manual incorrectly, but it seems to suggest that a restart/redeploy shouldn't be required:
Oracle GlassFish Server 3.1 Administration Guide
Also, it is weird that the following shows there is no monitoring data:
asadmin get -m server.thread-pools.thread-pool.http-thread-pool.*
Instead you must go through a specific network listener like:
asadmin get -m server.network.http-listener-2.thread-pool.*
It also took me by surprise that enabling thread-pool monitoring IS NOT enough to see thread pool statistics. You must also enable http-service monitoring:
asadmin enable-monitoring
asadmin set server.monitoring-service.module-monitoring-levels.thread-pool=HIGH
asadmin set server.monitoring-service.module-monitoring-levels.http-service=HIGH
That's all you should need to do.
Enable monitoring, set to HIGH, for the http-service module on the DAS, stand-alone instance, or cluster you want to monitor.
Deploy an app to the DAS, stand-alone instance, or cluster and make http-requests.
asadmin get -m *instancename*.network.*listener*.thread-pool.*
Looks like you are monitoring DAS, since you are using asadmin get -m server.network.my-listener.thread-pool.*.
I deployed a simple war to DAS and made a bunch of http requests. I see the corethreads-count and maxthreads-count have last sample time as -1. And the remaining statistics have actual last sample times.
asadmin get -m "server.network.http-listener-1.thread-pool.*"
server.network.http-listener-1.thread-pool.corethreads-count = 0
server.network.http-listener-1.thread-pool.corethreads-description = Core number of threads in the thread pool
server.network.http-listener-1.thread-pool.corethreads-lastsampletime = -1
server.network.http-listener-1.thread-pool.corethreads-name = CoreThreads
server.network.http-listener-1.thread-pool.corethreads-starttime = 1320764890444
server.network.http-listener-1.thread-pool.corethreads-unit = count
server.network.http-listener-1.thread-pool.currentthreadcount-count = 5
server.network.http-listener-1.thread-pool.currentthreadcount-description = Provides the number of request processing threads currently in the listener thread pool
server.network.http-listener-1.thread-pool.currentthreadcount-lastsampletime = 1320765351708
server.network.http-listener-1.thread-pool.currentthreadcount-name = CurrentThreadCount
server.network.http-listener-1.thread-pool.currentthreadcount-starttime = 1320764890445
server.network.http-listener-1.thread-pool.currentthreadcount-unit = count
server.network.http-listener-1.thread-pool.currentthreadsbusy-count = 0
server.network.http-listener-1.thread-pool.currentthreadsbusy-description = Provides the number of request processing threads currently in use in the listener thread pool serving requests
server.network.http-listener-1.thread-pool.currentthreadsbusy-lastsampletime = 1320765772814
server.network.http-listener-1.thread-pool.currentthreadsbusy-name = CurrentThreadsBusy
server.network.http-listener-1.thread-pool.currentthreadsbusy-starttime = 1320764890445
server.network.http-listener-1.thread-pool.currentthreadsbusy-unit = count
server.network.http-listener-1.thread-pool.dotted-name = server.network.http-listener-1.thread-pool
server.network.http-listener-1.thread-pool.maxthreads-count = 0
server.network.http-listener-1.thread-pool.maxthreads-description = Maximum number of threads allowed in the thread pool
server.network.http-listener-1.thread-pool.maxthreads-lastsampletime = -1
server.network.http-listener-1.thread-pool.maxthreads-name = MaxThreads
server.network.http-listener-1.thread-pool.maxthreads-starttime = 1320764890443
server.network.http-listener-1.thread-pool.maxthreads-unit = count
server.network.http-listener-1.thread-pool.totalexecutedtasks-count = 31
server.network.http-listener-1.thread-pool.totalexecutedtasks-description = Provides the total number of tasks, which were executed by the thread pool
server.network.http-listener-1.thread-pool.totalexecutedtasks-lastsampletime = 1320765772814
server.network.http-listener-1.thread-pool.totalexecutedtasks-name = TotalExecutedTasksCount
server.network.http-listener-1.thread-pool.totalexecutedtasks-starttime = 1320764890444
server.network.http-listener-1.thread-pool.totalexecutedtasks-unit = count
Command get executed successfully.
To instantly enable monitoring without a restart, use the enable-monitoring command:
enable-monitoring
enable-monitoring --modules jvm=LOW
enable-monitoring --modules thread-pool=HIGH
enable-monitoring --modules http-service=HIGH
enable-monitoring --modules jdbc-connection-pool=HIGH
The trick is that the thread-pool and http-service modules must be set to a HIGH level to get monitoring info.
For more info, see https://docs.oracle.com/cd/E26576_01/doc.312/e24928/monitoring.htm#GSADG00558