I have set up Azure Monitor custom log collection on my Linux VM by following the tutorial and all works fine, except that the Computer Name column in my custom table does not get populated. This means I have no easy way to distinguish between similar logs sourced from multiple VMs.
I could probably hack the hostname into the log file itself and have Azure parse it into a field, but I'd rather not customize the log file if possible; I believe the agent should be capable of propagating this information somehow.
Is there anything that needs to be configured outside of the tutorial, or is it a current limitation of the Azure Monitor Agent?
Fixed by Microsoft in February 2023: https://learn.microsoft.com/en-us/answers/questions/951629/custom-logs-hostname-field-azure-monitoring-agent
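Once the Computer column is populated, separating logs per VM is a one-line query. A minimal KQL sketch, assuming a hypothetical custom table named MyAppLogs_CL:

    MyAppLogs_CL
    | where TimeGenerated > ago(1h)
    | summarize LogCount = count() by Computer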
I have a situation where my central MySQL DB and file system (S3) run on EC2.
But one of my applications runs locally at my client's site on a Raspberry Pi 3 device, and it needs to look up data and files from both the DB and the file system in the cloud. The application in turn generates transactional records, which need to be uploaded to the DB and FS (maybe at the end of the day).
The catch is that the cloud may sometimes be unreachable due to connectivity issues (the site is in a remote area).
What could be the best strategies to accommodate this kind of a scenario?
Can AWS Greengrass help in here?
How to keep the lookup data (DB and FS) in sync with the local devices?
How to update/sync the transactional data generated by the local devices?
And finally, what could be the risks in such a deployment model?
Appreciate some help/suggestions.
How to keep the lookup data (DB and FS) in sync with the local devices?
You can create a Greengrass Group and include all of the devices in that group. Make the devices subscribe to a topic, e.g. DB/Cloud/update. Once a device receives a message on that topic, it triggers an on-demand Lambda function to download the latest information from the cloud. To make sure a device does not miss any updates while offline, you can use a persistent session, which ensures the device receives all of the missed messages when it comes back online.
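To make that concrete, here is a minimal sketch of such an on-demand Lambda function in Python for Greengrass v1. The topic names, bucket, key, and local path are placeholders, and it assumes a group subscription routes DB/Cloud/update messages to this function:

    import json
    import boto3
    import greengrasssdk

    s3 = boto3.client('s3')                  # used to pull the latest lookup data
    iot = greengrasssdk.client('iot-data')   # used to publish a status message back

    def function_handler(event, context):
        # 'event' is the JSON payload published on DB/Cloud/update,
        # e.g. {"bucket": "my-lookup-bucket", "key": "lookup/latest.sqlite"}
        bucket = event['bucket']
        key = event['key']
        local_path = '/local-data/' + key.split('/')[-1]

        # Download the updated lookup data to the device's local storage.
        s3.download_file(bucket, key, local_path)

        # Let the cloud (or other devices) know the update was applied.
        iot.publish(topic='DB/Device/updated',
                    payload=json.dumps({'downloaded': key, 'to': local_path}))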
How to update/sync the transactional data generated by the local devices?
You may try Stream Manager: https://docs.aws.amazon.com/greengrass/latest/developerguide/stream-manager.html
At the moment it lets you add a local Lambda function to pre-process the data and sync it up with the cloud.
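A rough Python sketch of that idea, using the Stream Manager SDK; the stream name, export identifier, the Kinesis stream "transactions-export", and the record appended at the end are all placeholders:

    from stream_manager import (
        StreamManagerClient,
        MessageStreamDefinition,
        StrategyOnFull,
        ExportDefinition,
        KinesisConfig,
    )

    client = StreamManagerClient()

    # Define a local stream that Stream Manager exports to Kinesis whenever
    # connectivity is available; data is buffered on disk while offline.
    client.create_message_stream(MessageStreamDefinition(
        name="LocalTransactions",
        strategy_on_full=StrategyOnFull.OverwriteOldestData,
        export_definition=ExportDefinition(
            kinesis=[KinesisConfig(identifier="TxExport",
                                   kinesis_stream_name="transactions-export")]
        ),
    ))

    # Each transactional record generated locally is appended to the stream;
    # it will be uploaded automatically once the connection returns.
    client.append_message("LocalTransactions", b'{"order_id": 42, "qty": 3}')
    client.close()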
I have a three-node GridGain cluster, and I am also running the GridGain Web Console agent and the Web Console on all three nodes. Everything is hosted on Windows Server.
I would like to load balance my Web Console. The problem is that I don't know how to share the user registration database, which the Web Console stores in its work directory. Can I use an external database to store that information so that my cluster uses the same database?
There is a problem with the Web Console Agent as well: how do I share the tokens stored in default.properties?
There is no definitive guide on how to create a Web Console cluster for high availability.
Can someone please guide me on how I can form a Web Console cluster that shares its user store and tokens?
Thanks
If you are looking for multi-cluster support, take a look at the documentation:
https://www.gridgain.com/docs/web-console/latest/multi-cluster-support
If you are looking for agent fault tolerance: just start several agents. The first agent will process all messages; the others will be in hot-standby mode.
If you are looking for connection fault tolerance between the agent and the cluster (if the cluster node that serves as the agent's connection point fails, the Web Console will lose its connection to the cluster), just specify several node addresses as a comma-separated list in the "node-uri" parameter (in default.properties or as a command-line argument).
For example:
node-uri=http://192.168.0.1:8080,http://192.168.0.2:8080,http://192.168.0.3:8080
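For example, each of the "several agents" mentioned above could run with a default.properties along these lines (the token value is a placeholder, and the exact key names should be verified against the default.properties shipped with your agent distribution):

    tokens=<token-copied-from-web-console-profile>
    server-uri=http://web-console-host:3000
    node-uri=http://192.168.0.1:8080,http://192.168.0.2:8080,http://192.168.0.3:8080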
Hope this helps.
Using Vagrant+Chef Solo I'm setting up two VMs: #1 is a TeamCity server, #2 is a TeamCity agent. Provisioning is done by first installing the TeamCity server package on VM #1, then the agent VM is booted and requests data from the server which is used to install the agent. That whole thing works fine.
But now I want to alter the server after the agent is done provisioning. I want to modify the server's database directly, to change an attribute that only becomes available after the agent has spun up. But is there a way for one VM's provisioning to trigger another VM's? Once the agent is done, I'd like to somehow resume provisioning the server so I can make the database edit.
Any thoughts, recommendations, or feedback welcomed. I'm new to Vagrant, Chef, and TeamCity, so there's a chance I'm missing a much easier solution.
* In case you're wondering why I want to edit the DB directly: TeamCity agents must be authorized before they can be used, and I want to do this programmatically. The solution I've found is to edit the DB directly, because authorization functionality is not exposed via the TeamCity REST API (as far as I can tell).
If you can test whether the agent is installed/answering, you can add a ruby_block that loops over this test before continuing the recipe execution.
The loop should include a sleep and a counter so it can't spin forever.
I have no knowledge of TeamCity, so I can't tell whether this is the best way.
In general, Chef is designed to manage your system, not simply provision it (though this is less true in the modern cloud world with "golden image" strategies). Nonetheless, in your case your best bet is to set up chef-client as a service that runs every 15 minutes. Once the client has finished provisioning, the next run on the server will be able to authorize it.
If you really want to "trigger" the one from the other, you'd need to either do that externally with something like etcd or consul, or set up an SSH key pair between the boxes and add a ruby_block on the client that either modifies the database directly or calls chef-client on the server.
I need to customize the Perforce server to achieve the following requirements:
I need a local replica server that stays in sync with the main server, which is in a different geographical location. The local and main servers can have the same time zone settings.
The client should be able to commit to the replica server.
The replica server will have build capability, as well as a test framework that runs whenever a build is successful.
Once the build and tests are successful, the code should be committed to the main server.
I know that the replica server provided by Perforce is used as a read-only server that can't write to the main server, and that a forwarding replica just forwards commands to the main server.
I can't use a proxy server, as the local server should keep working even when the main server is offline.
Is it possible to do this? Can anyone point me to some articles that would help me set up such a server?
I had asked the same question in the Perforce forum, but the question is still under verification by moderators.
An edge/commit setup may meet your requirements, as an Edge Server handles some local operations associated with workspaces and work in progress.
As well as read-only commands, the following operations can be performed on an Edge Server:
syncing, checking out, merging, resolving, and reverting files
More information about edge/commit architecture is available here:
http://www.perforce.com/perforce/doc.current/manuals/p4dist/chapter.distributed.html
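As a very rough sketch (the host name and server ID are placeholders; the full procedure, including a replication service user, running p4 serverid on the edge machine, and seeding the edge from a checkpoint, is described in the manual linked above), the edge server is registered on the commit server with a server spec along these lines:

    ServerID:     edge-local
    Type:         server
    Services:     edge-server
    Address:      edge-host.example.com:1666
    Description:
        Edge server for the local site; submits made here are relayed
        to the commit server.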
You may also want to look at BuildFarm servers:
http://www.perforce.com/perforce/doc.current/manuals/p4dist/chapter.replication.html#DB5-72814
Hope this helps,
Jen!
A Build Server doesn't allow build workspaces to submit files. If submitting files is required as part of the build process, consider using an edge server to support your automated build processes.
With the implementation of edge servers in 2013.2, we now recommend that you use an edge server instead of a build farm server.
Edge servers offer all the functionality of build farm servers and yet offload more work from the main server and improve performance, with the additional flexibility of being able to run write commands as part of the build process.
I am looking for a way to enumerate through the Virtual Directories (Windows Server 2003) in an App Pool and get diagnostic data (specifically WorkingSet, Private Bytes, and Virtual Bytes).
I've found plenty on how to enumerate through a server's App Pools, and getting the Virtual Directories within, but what do I need to do in order to obtain diagnostic data?
Basically I want to add a script that grabs this data for a monitoring app (Nagios). We have a script that already grabs the top 2 running worker processes on the server, but we don't know which app pool they belong to.
Thanks.
As you've discovered, it's a two-step process: you need to look up resource utilization for every worker process, and you also need to know which app pool corresponds to each worker process.
You've already figured out the first part. Here's how to do the other part: Windows Server 2003 includes a command-line script called iisapp.vbs. See the documentation for more details. The output from this command-line tool looks like this:
W3wp.exe PID: 2232 AppPoolID: DefaultAppPool
W3wp.exe PID: 2608 AppPoolID: MyAppPool
Simply parse the output from this script and you'll be able to tie process IDs to App Pools. Then look up each process by ID or filter your existing list of enumerated processes to find the matching Process ID.
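For example, here is a rough Python sketch of that parse-and-join step (treat it as a starting point rather than a finished Nagios plugin; it assumes Python and the third-party wmi package are available on the box):

    import re
    import subprocess

    import wmi  # third-party package: pip install wmi

    # Map worker-process PIDs to app pool names via iisapp.vbs (IIS 6 / Server 2003).
    output = subprocess.check_output(
        ["cscript", "//nologo", r"C:\Windows\System32\iisapp.vbs"],
        universal_newlines=True,
    )
    pid_to_pool = {}
    for line in output.splitlines():
        match = re.search(r"PID:\s*(\d+)\s+AppPoolID:\s*(\S+)", line)
        if match:
            pid_to_pool[int(match.group(1))] = match.group(2)

    # Pull per-process memory counters from WMI and report them per app pool.
    conn = wmi.WMI()
    for proc in conn.Win32_PerfFormattedData_PerfProc_Process():
        pid = int(proc.IDProcess)
        if pid in pid_to_pool:
            print("%s (PID %d): WorkingSet=%s PrivateBytes=%s VirtualBytes=%s" % (
                pid_to_pool[pid], pid, proc.WorkingSet,
                proc.PrivateBytes, proc.VirtualBytes))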
There may also be additional restrictions around security and the specific IIS configuration needed. See the documentation link above.
Note that Windows Server 2008 uses a different command, appcmd list wp, with a different output format, so this solution is specific to Windows Server 2003.