I have deployed a standalone Splunk server (which also acts as a deployment server) with Docker and installed a forwarder on my PC. Forwarder management shows that the forwarder has connected to the Splunk server. I then modified inputs.conf on the Splunk server as below:
[monitor://D:\git_web_test1\logs]
disabled=false
index=applogs
sourcetype=applogs
whitelist=*
I ran splunk reload deploy-server and could see that the logs were pushed to the Splunk server.
However, I found they went to the wrong index (main) with an unexpected sourcetype:
22/07/22 13:42:40.091
[2022-07-22T21:42:40.091] [INFO] default - server start at 8080.
host = DESKTOP-****   source = D:\git_web_test1\logs\app   sourcetype = app-too_small
I have never created this sourcetype before. Do you know why this happened?
The "-too_small" suffix is added to a sourcetype name when the sourcetype is undefined and the source does not contain enough data for Splunk to guess about the sourcetype's settings. A sourcetype is undefined if there is no props.conf entry for it on the indexer(s).
The fix is to create a sourcetype stanza in $SPLUNK_HOME/etc/system/local/props.conf on the Splunk server. It should look something like this:
[applogs]
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
The most likely reason the logs are in the wrong index is that the specified index doesn't exist. It's not enough to put index=applogs in inputs.conf; the same index name must also be present in indexes.conf on the indexer(s). On a standalone server the index can be created in the UI at Settings -> Indexes.
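If you prefer config files, a minimal indexes.conf stanza on the indexer does the same job. This is only a sketch; the paths assume the default $SPLUNK_DB volume layout, so adjust them to your deployment:
[applogs]
homePath = $SPLUNK_DB/applogs/db
coldPath = $SPLUNK_DB/applogs/colddb
thawedPath = $SPLUNK_DB/applogs/thaweddb
Alternatively, splunk add index applogs creates the index from the CLI (restart if it doesn't appear immediately).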
I am using Jenkins to trigger a Lambda, which creates an AMI image from an EC2 instance, creates a launch configuration, updates the auto scaling group with the new launch configuration, and lets the auto scaling group create instances from the latest launch configuration and terminate the older ones.
My code sometimes runs properly, but sometimes it gives me "ClientError: An error occurred (InvalidAMIName.Duplicate) when calling the CreateImage operation: AMI name "API_AMI_200220_1629" is already in use by AMI ami-033a3681473f9acbd".
My AMI names are created dynamically, like "API_AMI_$(date +%d%m%y_%H%M)", so technically no duplicate AMI names should ever be created. Yet I am getting this error, and the AMI stays in a Pending state. Does anyone have a suggestion as to why this happens only some of the time? Please check the script below.
import json
import time

import boto3


def lambda_handler(event, context):
    # Inputs passed in by the Jenkins-triggered invocation
    instance_id = event['instanceId']
    ami_name = event['amiName']
    launch_config = event['launchConfig']
    autoscaling_name = event['autoscalingName']

    ec2_client = boto3.client('ec2', region_name='us-east-1')
    autoscaling_client = boto3.client('autoscaling', region_name='us-east-1')

    print('############## Creating Image ########################')
    response = ec2_client.create_image(InstanceId=instance_id, Name=ami_name)
    print(response)
    image_id = response['ImageId']
    print(image_id)

    # Poll until the new AMI leaves the 'pending' state
    image_available = False
    while not image_available:
        describe_image = ec2_client.describe_images(ImageIds=[image_id])
        for image in describe_image['Images']:
            time.sleep(5)
            print(image['State'])
            if image['State'] == 'available':
                image_available = True
Thanks in advance.
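An editorial aside on the naming scheme, since it bears on the intermittent failures: date +%d%m%y_%H%M only has minute resolution, so two runs landing in the same minute (for example, an automatic Lambda retry after a timeout) will request the same AMI name. A hedged sketch of a collision-resistant alternative (unique_ami_name is a hypothetical helper, not part of the original script):

import uuid
from datetime import datetime, timezone

def unique_ami_name(prefix='API_AMI'):
    # A second-level timestamp plus a short random suffix makes name
    # collisions across retries or concurrent runs effectively impossible.
    stamp = datetime.now(timezone.utc).strftime('%d%m%y_%H%M%S')
    return '{}_{}_{}'.format(prefix, stamp, uuid.uuid4().hex[:6])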
I have an Apache Camel project that uses Quartz2 as the scheduler. The requirement is to make it a cluster. The code is deployed to WebLogic 12c, and Quartz is configured as per many samples with clustering enabled.
This is my properties file (without the datasource):
org.quartz.scheduler.instanceName = MyScheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.scheduler.jobFactory.class = org.quartz.simpl.SimpleJobFactory
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 10
org.quartz.threadPool.threadPriority = 5
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.oracle.OracleDelegate
org.quartz.jobStore.useProperties=true
org.quartz.JobBuilder.requestRecovery=true
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
When I deploy and start both nodes, I see that the QRTZ_SCHEDULER_STATE table has an extra entry for one of the nodes:
MyScheduler-routerContext server_node21567108546690
MyScheduler-routerContext-1 server_node11565896495100
MyScheduler-routerContext-1 server_node11567108547295
I am guessing that because of this, one node is only called once in a while, while the other node gets called all the time (and occasionally both nodes are invoked at the same time).
I have tried a clean restart of the WebLogic nodes, but the issue is still there.
This is what my route(s) look like:
from("quartz2://provRegGroup/createUsersTrigger?cron={{create_users_cron}}&job.name=createUsersJob")
.routeId("createUsersRB")
.log("**** starting check for create users");
//where
//create_users_cron=0+0,5,10,15,20,25,30,35,40,45,50,55+*+*+*+?
//expecting one node being called by the scheduler at a time..
I figured out what caused the issue. Apparently there were orphan WebLogic processes running on one (or even both) of the nodes; why things were in such a mess would be a question for our tech archs. ps showed two WebLogic servers running on a node: one that I had started recently and one that had been sitting there for about a month.
Since this should never happen in a production environment, I consider the issue resolved.
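For reference, a check along these lines (the exact command isn't given in the post) is what surfaces such orphans; any WebLogic server process with an unexpectedly old start time is a candidate stale scheduler instance:
ps -ef | grep -i weblogic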
Does anyone know the asadmin command-line equivalent to display the Resource data shown in the admin console (i.e. the Resource __TimerPool)?
I'm using Payara 4.1.1.171.1.
I typed asadmin monitor --help and it provided this:
monitor [--help]
--type type
[--filename filename]
[--interval interval]
[--filter filter]
instance-name
The type field only accepts "httplistener", "jvm" and "webmodule" as inputs.
So I can't use a "resource" or "jdbcpool" as a type.
Oddly enough, in the old GlassFish 2.1 (https://docs.oracle.com/cd/E19879-01/821-0185/gelol/index.html) you could select "jdbcpool" as the type.
Any help is appreciated.
I couldn't really find the answer in the Payara documentation (https://docs.payara.fish/documentation/payara-server/monitoring-service/monitoring-service.html), but using part of the GlassFish documentation (https://docs.oracle.com/cd/E18930_01/html/821-2416/ghmct.html#gipzv) I was able to get what I needed.
The command is asadmin get --monitor server.resources.__TimerPool.*
This then returns (this is a partial output):
server.resources.__TimerPool.numconnused-highwatermark = 2
server.resources.__TimerPool.numconnused-lastsampletime = 1559826720029
server.resources.__TimerPool.numconnused-lowwatermark = 0
server.resources.__TimerPool.numconnused-name = NumConnUsed
server.resources.__TimerPool.numconnused-starttime = 1559823838730
server.resources.__TimerPool.numconnused-unit = count
server.resources.__TimerPool.numpotentialconnleak-count = 0
server.resources.__TimerPool.numpotentialconnleak-description = Number of potential connection leaks
server.resources.__TimerPool.numpotentialconnleak-lastsampletime = -1
server.resources.__TimerPool.numpotentialconnleak-name = NumPotentialConnLeak
server.resources.__TimerPool.numpotentialconnleak-starttime = 1559823838735
server.resources.__TimerPool.numpotentialconnleak-unit = count
server.resources.__TimerPool.waitqueuelength-count = 0
server.resources.__TimerPool.waitqueuelength-description = Number of connection requests in the queue waiting to be serviced.
server.resources.__TimerPool.waitqueuelength-lastsampletime = -1
server.resources.__TimerPool.waitqueuelength-name = WaitQueueLength
server.resources.__TimerPool.waitqueuelength-starttime = 1559823838735
server.resources.__TimerPool.waitqueuelength-unit = count
Command get executed successfully.
It's important to add the .* at the end of the asadmin command in asadmin get --monitor server.resources.__TimerPool.*
If you neglect that and just enter asadmin get --monitor server.resources.__TimerPool it'll return
No monitoring data to report.
Command get executed successfully.
To see the list of resources available to monitor, type asadmin list --monitor server.resources.*
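A narrower dotted name also works when you want a single statistic rather than the whole subtree, for example (using one of the keys from the output above):
asadmin get --monitor server.resources.__TimerPool.numconnused-highwatermark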
I've recently set up some traces and extended events in SQL Server on our new virtual server to show what access users have to each database and whether they have logged in recently. I've set the output to save as a physical file on the server rather than writing to a SQL table, to save resources, and I've scheduled the traces as jobs running at 8am each morning with a 12-hour delay so we can record as much information as possible.
Our IT department would ideally prefer nothing other than the OS on the C drive of the virtual server, so I'd like to write the trace from my SQL script either to a different partition or to another server altogether.
I have tried pointing the code directly at a different server and at a partition other than C, but unless I write the trace/extended event files to the C drive I get an error message.
CREATE EVENT SESSION [LoginTraceTest] ON SERVER
ADD EVENT sqlserver.existing_connection(
    SET collect_database_name=(1), collect_options_text=(1)
    ACTION(package0.event_sequence, sqlos.task_time, sqlserver.client_pid,
        sqlserver.database_id, sqlserver.database_name, sqlserver.is_system,
        sqlserver.nt_username, sqlserver.request_id, sqlserver.server_principal_sid,
        sqlserver.session_id, sqlserver.session_nt_username, sqlserver.sql_text,
        sqlserver.username)),
ADD EVENT sqlserver.login(
    SET collect_database_name=(1), collect_options_text=(1)
    ACTION(package0.event_sequence, sqlos.task_time, sqlserver.client_pid,
        sqlserver.database_id, sqlserver.database_name, sqlserver.is_system,
        sqlserver.nt_username, sqlserver.request_id, sqlserver.server_principal_sid,
        sqlserver.session_id, sqlserver.session_nt_username, sqlserver.sql_text,
        sqlserver.username))
ADD TARGET package0.asynchronous_file_target(
    SET FILENAME = N'\\SERVER1\testFolder\LoginTrace.xel',
        METADATAFILE = N'\\SERVER1\testFolder\LoginTrace.xem');
The error I receive is this:
Msg 25641, Level 16, State 0, Line 6
For target, "package0.asynchronous_file_target", the parameter "filename" passed is invalid. Target parameter at index 0 is invalid
If I change it to another partition rather than a different server:
SET FILENAME = N'D:\Traces\LoginTrace\LoginTrace.xel',
METADATAFILE = N'D:\Traces\LoginTrace\LoginTrace.xem' );
SQL Server states that the command completed successfully, but the file isn't written to the partition.
Any ideas as to what I can do to write the files to another partition or server?
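An editorial note, since it isn't shown in the script above: CREATE EVENT SESSION only defines the session; no .xel file appears until the session is started, e.g.
ALTER EVENT SESSION [LoginTraceTest] ON SERVER STATE = START;
For the UNC path, it is also worth checking (an assumption, not something the post confirms) that the SQL Server service account can write to \\SERVER1\testFolder, since the file target is written by the engine's service account rather than by the login running the script.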
I'm trying to use the asadmin interface to monitor a thread-pool on GlassFish 3.1.1. I'm executing the following command:
asadmin get -m server.network.my-listener.thread-pool.*
and I'm getting data back, but most of it has lastsampletime = -1 (so the related data is zero and worthless).
Note: I've also tried the REST interface, which I believe asadmin delegates to, and the JMX interface. Same problem: much of the data has lastsampletime = -1.
I've already turned monitoring to HIGH for all modules. What am I missing?
It seems that redeploying my application was necessary for the monitoring to actually report values. Perhaps I interpreted the manual incorrectly, but it seems to suggest that a restart/redeploy wouldn't be required:
Oracle GlassFish Server 3.1 Administration Guide
Also, it is weird that the following shows there is no monitoring data:
asadmin get -m server.thread-pools.thread-pool.http-thread-pool.*
Instead you must go through a specific network listener like:
asadmin get -m server.network.http-listener-2.thread-pool.*
It also took me by surprise that enabling thread-pool monitoring IS NOT enough to see thread pool statistics. You must also enable http-service monitoring:
asadmin enable-monitoring
asadmin set server.monitoring-service.module-monitoring-levels.thread-pool=HIGH
asadmin set server.monitoring-service.module-monitoring-levels.http-service=HIGH
That's all you should need to do.
Enable monitoring, set to HIGH, for the http-service module on the DAS, stand-alone instance, or cluster you want to monitor.
Deploy an app to the DAS, stand-alone instance, or cluster and make http-requests.
asadmin get -m *instancename*.network.*listener*.thread-pool.*
Looks like you are monitoring DAS, since you are using asadmin get -m server.network.my-listener.thread-pool.*.
I deployed a simple war to the DAS and made a bunch of HTTP requests. I see that corethreads-count and maxthreads-count have a last sample time of -1, while the remaining statistics have actual last sample times.
asadmin get -m "server.network.http-listener-1.thread-pool.*"
server.network.http-listener-1.thread-pool.corethreads-count = 0
server.network.http-listener-1.thread-pool.corethreads-description = Core number of threads in the thread pool
server.network.http-listener-1.thread-pool.corethreads-lastsampletime = -1
server.network.http-listener-1.thread-pool.corethreads-name = CoreThreads
server.network.http-listener-1.thread-pool.corethreads-starttime = 1320764890444
server.network.http-listener-1.thread-pool.corethreads-unit = count
server.network.http-listener-1.thread-pool.currentthreadcount-count = 5
server.network.http-listener-1.thread-pool.currentthreadcount-description = Provides the number of request processing threads currently in the listener thread pool
server.network.http-listener-1.thread-pool.currentthreadcount-lastsampletime = 1320765351708
server.network.http-listener-1.thread-pool.currentthreadcount-name = CurrentThreadCount
server.network.http-listener-1.thread-pool.currentthreadcount-starttime = 1320764890445
server.network.http-listener-1.thread-pool.currentthreadcount-unit = count
server.network.http-listener-1.thread-pool.currentthreadsbusy-count = 0
server.network.http-listener-1.thread-pool.currentthreadsbusy-description = Provides the number of request processing threads currently in use in the listener thread pool serving requests
server.network.http-listener-1.thread-pool.currentthreadsbusy-lastsampletime = 1320765772814
server.network.http-listener-1.thread-pool.currentthreadsbusy-name = CurrentThreadsBusy
server.network.http-listener-1.thread-pool.currentthreadsbusy-starttime = 1320764890445
server.network.http-listener-1.thread-pool.currentthreadsbusy-unit = count
server.network.http-listener-1.thread-pool.dotted-name = server.network.http-listener-1.thread-pool
server.network.http-listener-1.thread-pool.maxthreads-count = 0
server.network.http-listener-1.thread-pool.maxthreads-description = Maximum number of threads allowed in the thread pool
server.network.http-listener-1.thread-pool.maxthreads-lastsampletime = -1
server.network.http-listener-1.thread-pool.maxthreads-name = MaxThreads
server.network.http-listener-1.thread-pool.maxthreads-starttime = 1320764890443
server.network.http-listener-1.thread-pool.maxthreads-unit = count
server.network.http-listener-1.thread-pool.totalexecutedtasks-count = 31
server.network.http-listener-1.thread-pool.totalexecutedtasks-description = Provides the total number of tasks, which were executed by the thread pool
server.network.http-listener-1.thread-pool.totalexecutedtasks-lastsampletime = 1320765772814
server.network.http-listener-1.thread-pool.totalexecutedtasks-name = TotalExecutedTasksCount
server.network.http-listener-1.thread-pool.totalexecutedtasks-starttime = 1320764890444
server.network.http-listener-1.thread-pool.totalexecutedtasks-unit = count
Command get executed successfully.
To enable monitoring instantly, without a restart, use the enable-monitoring command:
enable-monitoring
enable-monitoring --modules jvm=LOW
enable-monitoring --modules thread-pool=HIGH
enable-monitoring --modules http-service=HIGH
enable-monitoring --modules jdbc-connection-pool=HIGH
The trick is that both the thread-pool and http-service modules must be set to HIGH to get monitoring info.
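The module levels can reportedly also be combined into a single call with colon-separated pairs; this syntax is an assumption, so verify it against asadmin enable-monitoring --help on your version:
enable-monitoring --modules thread-pool=HIGH:http-service=HIGH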
For more info, refer to https://docs.oracle.com/cd/E26576_01/doc.312/e24928/monitoring.htm#GSADG00558