How do I make an if statement so that a function activated by 2 different tools updates a different leaderstat for each tool - scripting

Basically, I am trying to make two tools activate the same function, except one tool makes the function update one leaderstat while the other tool makes the function update a different leaderstat.
local remote = game.ReplicatedStorage.Give

remote.OnServerEvent:Connect(function(Player)
    local plr = Player
    if Activated by StarterPack.Child.Cloud then -- pseudocode
        plr.leaderstats.JumpBoost.Value = plr.leaderstats.JumpBoost.Value + 10
    or if Activated by StarterPack.Child.Speed then -- pseudocode
        plr.leaderstats.Speed.Value = plr.leaderstats.Speed.Value + 10
    end
end)
I expected this to allow one tool to activate the same function as the other tool but change a different leaderstat.

RemoteEvent:FireServer lets you pass any number of arguments when you fire it. Have each of your tools supply a different identifier, and then key off that identifier in RemoteEvent.OnServerEvent.
LocalScript inside Tool 1 (Cloud):
local remoteGive = game.ReplicatedStorage.Give
local tool = script.Parent

tool.Equipped:Connect(function()
    remoteGive:FireServer("Cloud")
end)
LocalScript inside Tool 2 (Speed):
local remote = game.ReplicatedStorage.Give
local tool = script.Parent

tool.Equipped:Connect(function()
    remote:FireServer("Speed")
end)
Server Script:
local remote = game.ReplicatedStorage.Give

remote.OnServerEvent:Connect(function(Player, toolId)
    if toolId == "Cloud" then
        Player.leaderstats.JumpBoost.Value = Player.leaderstats.JumpBoost.Value + 10
    elseif toolId == "Speed" then
        Player.leaderstats.Speed.Value = Player.leaderstats.Speed.Value + 10
    end
end)
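The same keyed-dispatch idea can be sketched outside Roblox: a single handler receives an identifier argument and branches on it. This is a minimal Python illustration of the pattern above (the stat names and amounts are just the ones from this thread, not anything Roblox-specific):

```python
# One handler, dispatched by an identifier argument - mirroring how the
# server script above keys off the string each tool passes to FireServer.
stats = {"JumpBoost": 0, "Speed": 0}

def on_give(tool_id, amount=10):
    # Branch on the identifier the caller supplied, like OnServerEvent does.
    if tool_id == "Cloud":
        stats["JumpBoost"] += amount
    elif tool_id == "Speed":
        stats["Speed"] += amount

on_give("Cloud")
on_give("Speed")
print(stats)  # {'JumpBoost': 10, 'Speed': 10}
```

The point of the design is that adding a third tool only means passing a third identifier; the handler itself stays in one place.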


Troubleshooting failed VASP calculations in pyiron

I currently have failed calculations in a project that show a status of "aborted" in the job table generated by:
proj_df = pr.job_table()
proj_df[proj_df["status"] == "aborted"]
How do I restart these calculations in a loop with modified input parameters (i.e. a modified INCAR)?
Also, does pyiron support detailed error reporting on the notebook side, or is it necessary to look at the raw output files in the project folder from the terminal?
To restart the aborted jobs with modified parameters, do:
CHANGED_KPAR = 10
NCPU = 40  # illustrative core count - set this to whatever you need

for sub_job in pr.iter_jobs():
    if sub_job.status.aborted:
        sub_job.input.incar['KPAR'] = CHANGED_KPAR
        sub_job.input.incar['SYSTEM'] = sub_job.name
        sub_job.input.incar['SIGMA'] = 0.2
        sub_job.server.queue = "cmti"
        sub_job.server.cores = NCPU
        sub_job.executable = "5.4.4_mpi_AutoReconverge"
        sub_job.run(delete_existing_job=True)

incorrect log feed in Splunk

I have deployed a stand-alone Splunk server (which also acts as a deployment server) with Docker and installed a forwarder on my PC; forwarder management shows that the forwarder has connected to the Splunk server. I then tried to modify inputs.conf on the Splunk server as below:
[monitor://D:\git_web_test1\logs]
disabled=false
index=applogs
sourcetype=applogs
whitelist=*
I ran splunk reload deploy-server and could see that the logs had been pushed to the Splunk server;
however, I found they were pushed to the wrong index (main) with an unexpected sourcetype:
22/07/22 13:42:40.091
[2022-07-22T21:42:40.091] [INFO] default - server start at 8080.
host = DESKTOP-****
source = D:\git_web_test1\logs\app
sourcetype = app-too_small
I have never created this sourcetype before, so do you know why this happened?
The "-too_small" suffix is added to a sourcetype name when the sourcetype is undefined and the source does not contain enough data for Splunk to guess about the sourcetype's settings. A sourcetype is undefined if there is no props.conf entry for it on the indexer(s).
The fix is to create a sourcetype stanza in $SPLUNK_HOME/etc/system/local/props.conf on the Splunk server. It should look something like this:
[applogs]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
The most likely reason the logs are in the wrong index is that the specified index doesn't exist. It's not enough to put index=applogs in inputs.conf; the same index name must also be present in indexes.conf on the indexer(s). On a standalone server the index can be created in the UI at Settings -> Indexes.
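If you prefer to create the index in configuration rather than the UI, a minimal indexes.conf stanza looks something like the sketch below. The paths shown use the conventional $SPLUNK_DB defaults; adjust them for your deployment, and note that the stanza name must match the index= value in inputs.conf exactly.

```
[applogs]
homePath   = $SPLUNK_DB/applogs/db
coldPath   = $SPLUNK_DB/applogs/colddb
thawedPath = $SPLUNK_DB/applogs/thaweddb
```

After adding the stanza, restart Splunk (or reload the configuration) so the new index is created.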

Payara asadmin command to monitor a specific resource

Does anyone know the asadmin command-line equivalent to display the Resource data shown in the image below (i.e. the __TimerPool resource)?
I'm using Payara 4.1.1.171.1.
I typed asadmin monitor --help and it provided this usage:
monitor [--help]
--type type
[--filename filename]
[--interval interval]
[--filter filter]
instance-name
The type field only accepts "httplistener", "jvm" and "webmodule" as inputs.
So I can't use a "resource" or "jdbcpool" as a type.
Oddly enough, in the old GlassFish 2.1 documentation (https://docs.oracle.com/cd/E19879-01/821-0185/gelol/index.html) you can select "jdbcpool" as the type.
Any help is appreciated.
I couldn't really find the answer in the Payara documentation (https://docs.payara.fish/documentation/payara-server/monitoring-service/monitoring-service.html),
but using part of the GlassFish documentation (https://docs.oracle.com/cd/E18930_01/html/821-2416/ghmct.html#gipzv) I was able to get what I needed.
The command is asadmin get --monitor server.resources.__TimerPool.*
This then returns (partial output shown):
server.resources.__TimerPool.numconnused-highwatermark = 2
server.resources.__TimerPool.numconnused-lastsampletime = 1559826720029
server.resources.__TimerPool.numconnused-lowwatermark = 0
server.resources.__TimerPool.numconnused-name = NumConnUsed
server.resources.__TimerPool.numconnused-starttime = 1559823838730
server.resources.__TimerPool.numconnused-unit = count
server.resources.__TimerPool.numpotentialconnleak-count = 0
server.resources.__TimerPool.numpotentialconnleak-description = Number of potential connection leaks
server.resources.__TimerPool.numpotentialconnleak-lastsampletime = -1
server.resources.__TimerPool.numpotentialconnleak-name = NumPotentialConnLeak
server.resources.__TimerPool.numpotentialconnleak-starttime = 1559823838735
server.resources.__TimerPool.numpotentialconnleak-unit = count
server.resources.__TimerPool.waitqueuelength-count = 0
server.resources.__TimerPool.waitqueuelength-description = Number of connection requests in the queue waiting to be serviced.
server.resources.__TimerPool.waitqueuelength-lastsampletime = -1
server.resources.__TimerPool.waitqueuelength-name = WaitQueueLength
server.resources.__TimerPool.waitqueuelength-starttime = 1559823838735
server.resources.__TimerPool.waitqueuelength-unit = count
Command get executed successfully.
It's important to add the .* at the end of the asadmin command in asadmin get --monitor server.resources.__TimerPool.*
If you neglect that and just enter asadmin get --monitor server.resources.__TimerPool it'll return
No monitoring data to report.
Command get executed successfully.
To see the list of resources available to monitor, type asadmin list --monitor server.resources.*

How do I make a local variable become the value of another local variable in a different script

I am making a jumping game, and I want the jump boost applied by the object to come from the JumpBoost value on the leaderboard.
local boostPart = script.Parent
local jump = leaderstats.gold.Value
local boostedJumpPower = jump.Value

local function onPartTouch(otherPart)
    local partParent = otherPart.Parent
    local humanoid = partParent:FindFirstChildWhichIsA("Humanoid")
    if ( humanoid ) then
        boostPart.CanCollide = false
        local currentJumpPower = humanoid.JumpPower
        if ( currentJumpPower < boostedJumpPower ) then
            humanoid.JumpPower = boostedJumpPower
            wait(5)
            humanoid.JumpPower = currentJumpPower
        end
    end
end

boostPart.Touched:Connect(onPartTouch)
I tried this but it didn't work, and I can't think of another way.
If I test the game and jump after touching the object, my jump is normal, not the leaderstats amount.
I think your problem is that you are simply not accessing the leaderstats value properly.
There is a difference between the Player you find in game.Players and the one in game.Workspace. The one in game.Players is where the leaderstats folder is, and the one in game.Workspace is where the player's humanoid lives.
In order to alter the leaderstats values, you need to use the humanoid to know which player touched the part, but then get the actual Player object from game.Players to properly access the folder.
local boostPart = script.Parent

local function onPartTouch(otherPart)
    -- get the player that touched the boost block
    local partParent = otherPart.Parent
    local humanoid = partParent:FindFirstChildWhichIsA("Humanoid")
    if ( humanoid ) then
        -- a player touched this, prevent more collisions
        boostPart.CanCollide = false

        -- get the current player's jump power by looking into the player's leaderstats
        local playerService = game.Players
        local playerName = partParent.Name
        local currentPlayer = playerService[playerName]
        local playerGold = currentPlayer.leaderstats.gold.Value -- << gold?

        -- give the player a temporary boost
        local currentJumpPower = humanoid.JumpPower
        if ( currentJumpPower < playerGold ) then
            humanoid.JumpPower = playerGold

            -- reset the player's jump power
            wait(5)
            humanoid.JumpPower = currentJumpPower
        end
    end
end

boostPart.Touched:Connect(onPartTouch)
This should allow you to properly read from the leaderstats values. If you want to alter them, you can use the same system for navigating into the Players folder and changing the values.
To answer your title question, "How do I make a local variable become the value of a local variable in another script", Roblox has a system for passing data using BindableEvents and BindableFunctions.
A simple example of this assumes you have a BindableEvent in the workspace.
Local Script 1:
local myValue = 1
local bEvent = game.Workspace.BindableEvent

-- listen for actions fired from other scripts
bEvent.Event:Connect(function(newValue)
    myValue = newValue
    print("Script 1 - myValue has changed to ", myValue)
end)

print("Script 1 - myValue originally equals ", myValue)
Local Script 2:
wait(1)
local bEvent = game.Workspace.BindableEvent

-- tell the other scripts that the value has changed
print("Script 2 - firing change signal")
bEvent:Fire(10)
This kind of system is useful if you have a lot of different moving parts and you want a centralized way for communicating changes, and don't want to build a custom data passing system.
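The BindableEvent pattern above is essentially an observer/callback system: listeners register with Connect, and Fire pushes a value to all of them. A minimal Python sketch of the same idea (the class and names here are illustrative, not Roblox APIs):

```python
# A tiny stand-in for a BindableEvent: listeners register callbacks with
# connect(), and fire() pushes a new value to every registered listener.
class BindableEvent:
    def __init__(self):
        self._listeners = []

    def connect(self, callback):
        self._listeners.append(callback)

    def fire(self, *args):
        for callback in self._listeners:
            callback(*args)

# "Script 1" keeps a value and listens for changes.
state = {"myValue": 1}
event = BindableEvent()
event.connect(lambda new_value: state.update(myValue=new_value))

# "Script 2" fires the change signal.
event.fire(10)
print(state["myValue"])  # 10
```

As in Roblox, neither "script" needs to know about the other; both only need a reference to the shared event object.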

Azure database copy, wait for ready state

I create new Azure databases now and then, without any human interaction, using a template database and the Azure copy-database T-SQL command.
After reading a few caveats, I wonder what the best way is to wait for the copied database to be ready.
I have posted my own answer, but it could end up in an infinite loop if the database copy fails.
What is the best way to be sure that the copying is done? This is important because you are only allowed one copy-database command at a time. I'm using an Azure worker with a C# queue that creates the databases as necessary.
This is how I'm doing it now. It should work, though unfortunately errors during copying are rare, so the failure path is hard to test.
declare
    @sName varchar(max),
    @sState varchar(max),
    @iState int,
    @sDbName varchar(50)

set @sDbName = 'myDbName'
set @iState = 7 -- copying

while @iState = 7
begin
    WAITFOR DELAY '00:00:05'
    SELECT @sName = name, @iState = state, @sState = state_desc FROM sys.databases WHERE name = @sDbName
end

SELECT name, state, state_desc FROM sys.databases WHERE name = @sDbName
This code is actually run from an Azure worker in C#.
Edit:
If you want a more granular approach, use sys.dm_database_copies.
Otherwise, remember that this SQL returns the state as an integer, and the copy is only successful if the state == 0; otherwise the copy has failed.
the states are:
0 = ONLINE
1 = RESTORING
2 = RECOVERING
3 = RECOVERY_PENDING
4 = SUSPECT
5 = EMERGENCY
6 = OFFLINE
7 = COPYING (Applies to Windows Azure SQL Database)
Windows Azure SQL Database returns the states 0, 1, 4, and 7.
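Based on the state codes above, the polling loop can be sketched so it also terminates when the copy fails instead of looping forever. This is a sketch, not a tested production script: the database name and delay are placeholders, and a robust version should also bail out if the sys.databases row disappears entirely or after an overall timeout.

```
declare @iState int = 7          -- 7 = COPYING
declare @sDbName varchar(50) = 'myDbName'

-- poll until the database leaves the COPYING state
while @iState = 7
begin
    WAITFOR DELAY '00:00:05'
    SELECT @iState = state FROM sys.databases WHERE name = @sDbName
end

-- 0 = ONLINE means the copy succeeded; any other final state means it failed
if @iState = 0
    SELECT 'copy succeeded' as result
else
    SELECT 'copy failed, state = ' + cast(@iState as varchar(10)) as result
```

From the C# worker, the final SELECT (or just the integer state) can be read back to decide whether to issue the next copy command or to retry.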