What causes "Value cannot be negative" in cloudwatch agent logs? - amazon-cloudwatch

I see the following log entries in /var/log/amazon/amazon-cloudwatch-agent/amazon-cloudwatch-agent.log:
2021-10-06T13:18:00Z W! Value cannot be negative: -1
2021-10-06T13:18:00Z W! Value cannot be negative: -1
2021-10-06T13:18:00Z W! Value cannot be negative: -1
Is there any way to find out the source of this -1 value?

I don't think there is a good way to find the actual source in the general case; that's why I opened an issue in aws/amazon-cloudwatch-agent.
But there are some educated guesses you can make:
First, check whether you are using collectd's GenericJMX plugin to collect the NonHeapMemoryUsage attribute from the java.lang:type=Memory MBean; it is part of collectd's usual GenericJMX example configuration. NonHeapMemoryUsage is a composite attribute with committed, init, max, and used values, and max is always -1.
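For reference, the relevant MBean block of that example configuration looks roughly like this (a minimal sketch; the ServiceURL host/port and jar paths are placeholders for your setup, and the jmx_memory type is assumed to be defined in your types.db, matching the chain rule below):
LoadPlugin java
<Plugin "java">
  JVMArg "-Djava.class.path=/usr/share/collectd/java/collectd-api.jar:/usr/share/collectd/java/generic-jmx.jar"
  LoadPlugin "org.collectd.java.GenericJMX"
  <Plugin "GenericJMX">
    <MBean "memory">
      ObjectName "java.lang:type=Memory"
      <Value>
        Type "jmx_memory"
        # Composite attribute: yields nonheap-committed, nonheap-init, nonheap-max, nonheap-used
        InstancePrefix "nonheap-"
        Table true
        Attribute "NonHeapMemoryUsage"
      </Value>
    </MBean>
    <Connection>
      ServiceURL "service:jmx:rmi:///jndi/rmi://localhost:8993/jmxrmi"
      Collect "memory"
    </Connection>
  </Plugin>
</Plugin>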
You can verify this using the jmxterm CLI utility:
java -jar jmxterm-1.0.2-uber.jar
$>open localhost:8993
$>get -d java.lang -b java.lang:type=Memory NonHeapMemoryUsage
#mbean = java.lang:type=Memory:
NonHeapMemoryUsage = {
  committed = 312860672;
  init = 7667712;
  max = -1;
  used = 304634560;
};
If this is the case, you can use collectd's Chains functionality to drop the TypeInstance=nonheap-max metric by adding the following to your collectd.conf:
<Chain "PostCache">
# only show system and user
<Rule "jmx_memory">
<Match "regex">
Plugin "GenericJMX"
Type "jmx_memory"
</Match>
<Match "regex">
TypeInstance "nonheap-max"
Invert true
</Match>
# If a jmx_memory metric and it's not the nonheap-max then send it to the amazon-cloudwatch-agent 25826
<Target "write">
Plugin network
</Target>
</Rule>
# Default target drop all metrics
Target "stop"
</Chain>
Second, if it is not NonHeapMemoryUsage, it is likely some other collectd metric. You can drop all metrics with negative values using collectd's <Match "value">, although I don't recommend it:
<Chain "PostCache">
# only show system and user
<Rule "drop negative">
<Match "value">
Min 0
Satisfy "All"
</Match>
# send positive values to amazon-cloudwatch-agent 25826
<Target "write">
Plugin network
</Target>
</Rule>
# Default target drop all metrics
Target "stop"
</Chain>

Related

How to change the max size for file upload on AOLServer/CentOS 6?

We have a portal for our customers that allows them to start new projects directly on our platform. The problem is that we cannot upload documents bigger than 10 MB.
Every time I try to upload a file bigger than 10 MB, I get a "The connection was reset" error. After some research it seems that I need to change the max upload size, but I don't know where to do it.
I'm on CentOS 6.4/RedHat with AOLServer.
Language: TCL.
Does anyone have an idea of how to do it?
EDIT
In the end I could solve the problem with the command ns_limits set default -maxupload 500000000.
In your config.tcl, add the following line to the nssock module section:
set max_file_upload_mb 25
# ...
ns_section ns/server/${server}/module/nssock
# ...
ns_param maxinput [expr {$max_file_upload_mb * 1024 * 1024}]
# ...
It is also advisable to constrain the upload time by setting:
set max_file_upload_min 5
# ...
ns_section ns/server/${server}/module/nssock
# ...
ns_param recvwait [expr {$max_file_upload_min * 60}]
If running on top of nsopenssl, you will have to set those configuration values (maxinput, recvwait) in a different section.
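A sketch of what that could look like, assuming the SSL driver module is registered as nsopenssl (adjust the module name to your installation):
set max_file_upload_mb 25
set max_file_upload_min 5
# ...
ns_section ns/server/${server}/module/nsopenssl
# ...
ns_param maxinput [expr {$max_file_upload_mb * 1024 * 1024}]
ns_param recvwait [expr {$max_file_upload_min * 60}]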
I see that you are running Project Open. As well as setting the maxinput value for AOLserver, as described by mrcalvin, you also need to set 2 parameters in the Site Map:
Attachments package: parameter "MaximumFileSize"
File Storage package: parameter "MaximumFileSize"
These should be set to values in bytes, but not larger than the maxinput value for AOLserver. See the Project Open documentation for more info.
In the case where you are running Project Open behind a reverse proxy, check the Pound or Nginx documentation; most likely you will need to set a larger file upload limit there too.
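For Nginx, for example, the request body limit is controlled by client_max_body_size (shown here with an illustrative 25 MB limit to match the AOLserver value above):
http {
    # Must be at least as large as AOLserver's maxinput
    client_max_body_size 25m;
}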

How can I get the pts number (/dev/pts/[X]) for virtio-serial with libvirt?

I'm running a QEMU+KVM VM service with libvirt.
My VM's console configuration looks like this:
<devices>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<console type='pty'>
<target type='virtio' port='1'/>
</console>
</devices>
Because I don't specify a pts number, the pty number is allocated dynamically depending on host status. I can get serial0's pty number easily with the "virsh ttyconsole" command, but where can I get virtio1's pty number?
I want to use the virtio-serial pty number allocated by libvirt. Thanks.
The /dev/pts/XXX path will be recorded in the XML document when the guest is running. The virsh ttyconsole command merely reads it from the XML.
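For example, while the guest is running, dumping the live XML should show the allocated pty path for each console (the guest name and pts numbers below are illustrative):
virsh dumpxml myguest
...
<console type='pty' tty='/dev/pts/5'>
  <source path='/dev/pts/5'/>
  <target type='virtio' port='1'/>
</console>
...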

Hue: oozie parameters

I want to pass 2 parameters to my HiveQL script in Oozie.
My script:
ALTER TABLE default.otarie_appsession
ADD IF NOT EXISTS PARTITION ( insert_date=${dt},hr=${hr} );
When I submit the job it asks for the parameter values; after I fill them in, I get this error:
2016-02-05 18:41:55,460 WARN org.apache.oozie.action.hadoop.HiveActionExecutor: SERVER[DVS1VM65] USER[root] GROUP[-] TOKEN[] APP[My_Workflow] JOB[0000290-160122145737153-oozie-oozi-W] ACTION[0000290-160122145737153-oozie-oozi-W#hive-a586] Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.HiveMain], exit code [40000]
This is the XML of the workflow:
<workflow-app name="My_Workflow" xmlns="uri:oozie:workflow:0.5">
<start to="hive-a586"/>
<kill name="Kill">
<message>L'action a échoué, message d'erreur[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<action name="hive-a586">
<hive xmlns="uri:oozie:hive-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<job-xml>/user/oozie/some_scripts/hive-site.xml</job-xml>
<script>/user/oozie/some_scripts/addpart.hql</script>
<param>hr=</param>
<param>dt=</param>
</hive>
<ok to="End"/>
<error to="Kill"/>
</action>
<end name="End"/>
</workflow-app>
When I remove all parameters and use hard-coded values, the script works fine, so it's clear that I have a problem passing parameters. My final goal is to pass the current date and hour.
Thank you.
What about setting Hive parameters with Oozie parameters...
<param>hr=${bilouteHR}</param>
<param>dt=${bilouteDT}</param>
...then setting values for these Oozie parameters at submission time?
bilouteHR = 00
bilouteDT = 20160105
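If the goal is the current date and hour, one option (a sketch, assuming the workflow is triggered from an Oozie coordinator; bilouteHR and bilouteDT are the property names from above) is to compute them with coord:formatTime in the coordinator's action configuration:
<property>
    <name>bilouteDT</name>
    <value>${coord:formatTime(coord:nominalTime(), 'yyyyMMdd')}</value>
</property>
<property>
    <name>bilouteHR</name>
    <value>${coord:formatTime(coord:nominalTime(), 'HH')}</value>
</property>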
Hope that solves your issue, biloute.

Having trouble with JSON formatting when using Fluentd to send logs to BigQuery

I am using the Fluentd BigQuery plugin to send logs to BigQuery, but the plugin seems to change ":" to "=>" when sending the data.
The logs arrive in BigQuery with key=>value instead of key:value formatting.
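To illustrate (the body and header field names are from the config below):
expected in BigQuery: {"body": "...", "header": "..."}
actual in BigQuery:   {"body"=>"...", "header"=>"..."}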
I have the following match definition in td-agent.conf:
<match bq.*.*>
type copy
deep_copy true
<store>
type bigquery
auth_method json_key
json_key /home/fereshteh/keys/LL-POC-9081311ba6a0.json
project my-poc
dataset MY_POC
table LogMessage
auto_create_table true
field_string body,header
# buffer_chunk_limit
# buffer_chunk_records_limit 300
buffer_queue_limit 10240
num_threads 16
# flush_interval 1
buffer_type file
buffer_path /var/log/td-agent/buffer/bq
</store>
<store>
type file
path /var/log/td-agent/bq-logtextmsg.log
</store>
</match>
Using "copy" feature, I was able to verify the source part is working correctly and that the copied logs do show the correct formatting of the json logs, key:value.
However, in BigQuery they show up as key=>value.
Any suggestions on how to change that to use ":"?
BigQuery json_extract functions doesn't like "=>" and expects ":"s.
Thanks.

ANT sql task: How to run SQL and PL/SQL and notice execution failure?

To execute an .sql script file from ANT, the following task works fine:
<sql
classpath="${oracle.jar}" driver="oracle.jdbc.OracleDriver"
url="jdbc:oracle:thin:###{db.hostname}:#{db.port}:#{db.sid}"
userid="#{db.user}"
password="#{db.password}"
src="#{db.sql.script}" />
But if the .sql file contains not only pure SQL but also PL/SQL, the task will fail. This can be solved with the following snippet:
<sql
classpath="${oracle.jar}" driver="oracle.jdbc.OracleDriver"
url="jdbc:oracle:thin:###{db.hostname}:#{db.port}:#{db.sid}"
userid="#{db.user}"
password="#{db.password}"
delimiter="/"
delimitertype="row"
src="#{db.sql.script}" />
But if my script contains both SQL and PL/SQL, then neither ANT task will work. Another option would be to use the "exec" task with "sqlplus":
<exec executable="sqlplus" failonerror="true" errorproperty="exit.status">
<arg value="${db.user}/${db.password}#${db.hostname}:${db.port}/${db.sid}"/>
<arg value="#${db.sql.script}"/>
</exec>
But unfortunately this task never fails, so the build always finishes with "SUCCESSFUL" even when the SQL script execution failed. The error property I tried to set does not return any error code either.
Any ideas/suggestions how to solve this problem?
Thanks,
Peter
Peter,
Add at the beginning of scripts
WHENEVER SQLERROR EXIT SQL.SQLCODE;
Then sqlplus will exit with exit code != 0.
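A sketch of a script header that makes failures propagate (the WHENEVER OSERROR line is optional but covers OS-level errors too):
WHENEVER SQLERROR EXIT SQL.SQLCODE;
WHENEVER OSERROR EXIT FAILURE;
-- ... rest of the script ...
With failonerror="true" on the exec task above, a non-zero sqlplus exit code then fails the build.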
Pretty late, I guess, but I hope this will help someone:
In general, I think we should prefer the sql task over exec executable="sqlplus" for many reasons, such as: it keeps working if we change the DB provider, you don't spend resources on a new process, "STOPPING" will work (as opposed to sqlplus.exe), etc.
Anyway, here's a suggestion on how to keep both PL/SQL and SQL in the same script so that it will work:
First transform myScript.sql with a copy task:
<copy todir="...">
  <fileset dir="..." includes="myScript.sql"/>
  <filterchain>
    <replaceregex byline="false" pattern=";" replace="${line.separator}/" flags="mg"/>
    <replaceregex byline="false" pattern="/[\s]*/" replace=";${line.separator}/" flags="mg"/>
  </filterchain>
</copy>
then give the result to:
<sql delimiter="/" delimitertype="row" src="myScript.sql"/>
Explanation:
If you have regular SQL commands:
drop table x;
select blah from blue where bli=blee;
they will be transformed to:
drop table x
/
select blah from blue where bli=blee
/
which is equivalent, and the sql task with the "/" delimiter can handle them.
On the other hand,
BEGIN
blah
END;
/
will be transformed to:
BEGIN
blah
END/
/
and, using the second regex, transformed back to:
BEGIN
blah
END;
/
So everybody wins! Hurray!
Good luck.