Hi, please help me with the question below.
I have installed the CloudWatch agent on my AWS instance by following this document:
https://www.sentinelone.com/blog/ec2-memory-usage/#:~:text=Simply%20go%20to%20the%20CloudWatch,memory%20usage%20in%20a%20graph
The terminal shows the agent is installed and running, but I am not able to see the CWAgent namespace under All metrics. Can anyone help me figure out what the issue is?
I have set up my monitoring agent VM and also added the VM to the SQL preview in the Monitor section. However, I am getting the error below and I am not sure how to actually upgrade the WLI version. Could anyone help? Thank you.
I apologize in advance for asking this question; it must be something very silly that I am overlooking. I am a beginner with GCP. When I try to create a job using the GUI and the Google Pub/Sub to BigQuery template, I get the following error:
The workflow could not be created. Causes: (717932ea69118a95): Unable to get machine type information for machine type n1-standard-4 in zone us-central1-a because of insufficient permissions. Please refer to https://cloud.google.com/dataflow/access-control#creating_jobs and make sure you have sufficient permissions.
I went to IAM and checked that I am already the owner of the project. Can someone please guide me?
Thanks
We faced a similar issue. The root cause we found was that the Dataflow service account was missing in IAM.
You should find a service account similar to this:
service-xxxxxxxxxxxx@<<project_name>>.iam.gserviceaccount.com
If you don't find it, try disabling the Dataflow API and re-enabling it.
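If it helps, the API can also be cycled from the command line with gcloud (this assumes the Cloud SDK is installed and you are authenticated; YOUR_PROJECT_ID is a placeholder):

```shell
# Disable and re-enable the Dataflow API so its service account is recreated.
gcloud services disable dataflow.googleapis.com --project=YOUR_PROJECT_ID
gcloud services enable dataflow.googleapis.com --project=YOUR_PROJECT_ID
```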
The service account also needs the roles/compute.viewer role.
We solved this by granting the user the Dataflow Admin (roles/dataflow.admin) role in the IAM console. The link in the error message lists more granular permissions you can grant if you don't want your Dataflow developers to be full admins.
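For reference, the same grant can be made from the command line; the project ID and user email below are placeholders:

```shell
# Grant the Dataflow Admin role to a specific user on the project.
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="user:developer@example.com" \
  --role="roles/dataflow.admin"
```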
I am currently working on SnappyData SQL query functionality. SnappyData has support for Apache Zeppelin, and all of this can be done from Apache Zeppelin connected to SnappyData. I configured everything as per the SnappyData documentation (snappy-data with zeppelin link). After finishing the setup I can run jobs using the Spark interpreter with SnappyData Scala programming, but I am not able to run jobs using the SnappyData interpreter; it throws "org.apache.thrift.transport.TTransportException". I checked my logs as well, and the same error appears in the log file. Does anyone know what the issue is? If you know anything about it, please share. Thanks in advance.
I set up a Neo4j database on Azure following this guide. The set-up process went fine. The issue I'm having is that the database is not asking for a username or password when I access it through the public port. In other words, anyone can access and edit the database by simply navigating to the URL. Can anyone point me in the right direction as to how to set up authentication?
First: That's a fairly old walkthrough, with the v1.8 version of Neo4j running on the preview of Virtual Machines. And that image had a pre-set username and password. Look closely at the login box:
"The server says neo4j graphdb"
Those two will be your username and password.
Note: This is not the case if you use the latest 2.0x image in VM Depot.
I was able to get this working by modifying the /conf/neo4j-server.properties file and following the instructions at the GitHub repo.
# Basic Auth-Filter-Extension
# See docs here: https://github.com/neo4j-contrib/authentication-extension
org.neo4j.server.credentials=your_user_name:your_password
org.neo4j.server.thirdparty_jaxrs_classes=org.neo4j.server.extension.auth=/auth
I'm working on a Java POS and I'm a newbie. I need Pentaho Data Integration (Kettle) in order to integrate the Java POS database with the database in the ERP. I followed this manual:
"http://www.scribd.com/doc/19583351/Install-Guide-for-Pentaho-Business-Intelligence-BI-Suite-CE"
and I'm stuck at Part 3, Step 1. When I type the localhost address in the browser, instead of getting the Pentaho login page I get an "HTTP Status 404" error.
Do I have to change the Tomcat server port or anything else? Please help me find the glitch.
Check your server.xml to see what port Tomcat is listening on. I assume Tomcat started successfully when you launched it? (Check the log for errors.)
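For reference, the listening port is the port attribute on the HTTP Connector element in Tomcat's conf/server.xml. 8080 is shown here as the common default; your install may use a different value:

```xml
<!-- HTTP connector: the "port" attribute is the one you browse to -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />
```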
Use Google.
Finally, if you want to use ETL/Kettle then you should start by looking at the Spoon tool - this is the UI for building ETL. So look at that first, perhaps.
(You don't even need the BI server if all you're doing is ETL.)
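If Spoon is the way you go, it is typically launched from the data-integration folder of a standard PDI download (script names assume a stock install; no separate server is required):

```shell
# Linux / macOS
cd data-integration
./spoon.sh

# Windows
Spoon.bat
```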