Azure Monitor text log using Azure Monitor Agent - azure-log-analytics

I'm trying to set up collection of a text-based log file.
I'm able to retrieve the log file when NSGs are disabled, so my data collection rules, tables and endpoints must be correct (I think).
When activating the NSG on the subnet where the (Windows) VM is located, I'm allowing the AzureMonitor and Storage service tags both outbound and inbound (any port).
I do have a deny-all outbound rule.
I've tried disabling the NSG and that works, so I'm guessing it's the deny-all outbound rule that is causing the issue, but I would expect the allow rules for the service tags, which have a lower priority number, to take precedence over it. I would like to keep the deny-all outbound rule.
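For reference, the rules described above amount to roughly the following (resource group, NSG and rule names are placeholders, and the priorities are only there to illustrate that the allow rules have lower numbers than the deny-all, so they are evaluated first):

az network nsg rule create \
  --resource-group my-rg --nsg-name my-nsg \
  --name Allow-AzureMonitor-Out --priority 100 \
  --direction Outbound --access Allow --protocol '*' \
  --destination-address-prefixes AzureMonitor --destination-port-ranges '*'

az network nsg rule create \
  --resource-group my-rg --nsg-name my-nsg \
  --name Allow-Storage-Out --priority 110 \
  --direction Outbound --access Allow --protocol '*' \
  --destination-address-prefixes Storage --destination-port-ranges '*'

az network nsg rule create \
  --resource-group my-rg --nsg-name my-nsg \
  --name Deny-All-Out --priority 4000 \
  --direction Outbound --access Deny --protocol '*' \
  --destination-address-prefixes '*' --destination-port-ranges '*'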
Any tips?

Related

Writing to S3 from remote instance

I would like to write files from a remote machine to Amazon S3. The machine I am working on restricts outbound connections unless they are specifically allowed. I can have an IP whitelisted, but from my understanding S3 uses a pool of addresses and they are not fixed. I'm not sure what my options are. Anything helps.
Thank you
Option 1:
AWS actually publishes the ranges of IP addresses used by each service.
References:
1. https://ip-ranges.amazonaws.com/ip-ranges.json
2. https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
You can write a script to download these ranges and automate updating your security group accordingly; see the sketch below.
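As a minimal Python sketch of the download-and-filter step (assuming you only care about S3 in one region; the region name below is a placeholder):

# Minimal sketch: fetch AWS's published IP ranges and list the S3 prefixes
# for one region (region name is a placeholder; adapt as needed).
import json
import urllib.request

IP_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(IP_RANGES_URL) as resp:
    data = json.load(resp)

s3_prefixes = [
    p["ip_prefix"]
    for p in data["prefixes"]
    if p["service"] == "S3" and p["region"] == "us-east-1"
]

for prefix in s3_prefixes:
    print(prefix)

# From here you could update your security group or firewall rules with
# these CIDR blocks, e.g. via boto3 or your own tooling.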
Option 2:
If the remote resource (an EC2 instance) you are using is also in AWS,
then you can create an IAM role (which allows the S3 operations you need) and attach that role to your remote instance.
I have not checked this option with a restriction on outbound connections, but it could be a better option if it works.
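If the role route works for you, the upload itself needs no credentials in code, because the SDK picks them up from the instance profile automatically; a minimal sketch, with placeholder bucket and file names:

# Minimal sketch: with an IAM role attached to the EC2 instance,
# boto3 resolves credentials from the instance profile automatically.
import boto3

s3 = boto3.client("s3")  # no access keys needed; the role's credentials are used
s3.upload_file("/tmp/report.csv", "my-example-bucket", "reports/report.csv")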

Q: TIBCO BW instance-specific variable in .tra file

I am a TIBCO administrator, and one of our developers is asking me to change a property called bw.platform.services.retreiveresources.Hostname in the .tra file. I see the property defined at the PAR level, but since it is a hostname they need it defined for each process archive; we have two server instances.
The reason I don't want to make the change in the .tra file is that the changes get lost when we deploy. We have many apps, and it would be a nightmare to keep track of all these changes in the .tra files every time we have a deployment.
Since I am not a developer, could you please tell me in simple terms how this can be done in TIBCO without modifying the .tra file, so I can pass the info along to the developers?
Thank you
This property is used to set the hostname for WSDL client retrieval and defaults to "localhost", as far as I know.
It doesn't need to be set at the .tra level; it can also be set in TIBCO Administrator at the process archive level.
As you mentioned, there are two instances hosted for each service (I assume on two different hosts).
From that I would assume there's a load balancer configured in front that (as I would consider good practice) rewrites the hostnames in the WSDL by a rule.
Depending on the configuration of your BW engine boxes, I would leave it at 0.0.0.0 or localhost and let the load balancer rewrite the WSDL host for clients, so the engine host isn't exposed directly.
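For illustration only (the value is a placeholder, and the exact form may differ in your environment), the setting itself is just a name/value pair, whether it lives in a .tra file or on the process archive in Administrator:

bw.platform.services.retreiveresources.Hostname=0.0.0.0

Setting that pair on the process archive in Administrator keeps it out of the .tra file, which is the point of setting it there instead.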
Hope that helps
Seb
This is used to retrieve the WSDL, not to retrieve the process.

How to block specific IPs in apache?

I have a Java-based application running in Tomcat. It is an online app; requests first go to Apache, which then forwards them to Tomcat.
Today I was not able to log into my application, and I noticed warnings in the catalina.out file. They said "An attempt was made to authenticate the locked user "root"" and "An attempt was made to authenticate the locked user "manager"".
In my localhost_access_log.2015-07-07.txt I found the IP addresses below trying to access the system.
83.110.99.198
117.21.173.36
I need to block these two IPs from accessing my system. The first IP is a well-known blacklisted address according to the anti-hacker-alliance. How can I do this?
FYI, I am using Apache 2, so the main configuration file is apache2.conf.
(Please don't remove the IP addresses I listed above, as I need other developers to be aware of the threat as well.)
If you're using a VPC:
The best way to block traffic from particular IPs to your resources is to use network ACLs (NACLs).
Add a DENY entry for all protocols, ingress, for these IPs. This is better than doing it on the server itself because traffic from these IPs never even reaches your instances; it is blocked at the VPC level.
NACLs are on the subnet level, so you'll need to identify the subnet your instance is in and then find the correct NACL. You can do all of this using the VPC Dashboard on the AWS console.
This section of the documentation will help you:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
Note that the two entries blocking these two IPs need rule numbers lower than the default allow rule's number (100); use 50 and 51, for example.
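As an illustration of those two entries with boto3 (the NACL ID is a placeholder; look yours up in the VPC dashboard):

# Sketch: add DENY ingress entries for the two IPs to an existing network ACL.
import boto3

ec2 = boto3.client("ec2")

for rule_number, ip in [(50, "83.110.99.198"), (51, "117.21.173.36")]:
    ec2.create_network_acl_entry(
        NetworkAclId="acl-0123456789abcdef0",  # placeholder NACL ID
        RuleNumber=rule_number,
        Protocol="-1",          # all protocols
        RuleAction="deny",
        Egress=False,           # ingress rule
        CidrBlock=f"{ip}/32",
    )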
You can use an .htaccess file:
Order Deny,Allow
Deny from 83.110.99.198
Deny from 117.21.173.36
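Note that Order/Deny is the Apache 2.2 syntax (on 2.4 it only works if mod_access_compat is enabled); on Apache 2.4 the native equivalent would be something like:

<RequireAll>
    Require all granted
    Require not ip 83.110.99.198
    Require not ip 117.21.173.36
</RequireAll>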
It's probably better to add this as a firewall rule, though. Are you using any firewall service now?

Using Mule how to pull a file from an FTP site in response to an incoming VM event?

When I get a triggering event on an inbound VM queue I want to pull a file from an FTP site.
The problem is the flow needs to be triggered by the incoming VM message not the FTP file's availability.
I cannot figure out how to have what are essentially two inputs. I considered using a content enricher, but it seems to call an outbound endpoint. A composite source can have more than one input, but it runs when any of its message sources triggers it, not on a combination of sources.
I am setting up an early alert resource monitoring of FTP, file systems, databases, clock skew, trading partner availability, etc. Periodically I would like to read a custom configuration file that tells what to check and where to do it and send a specific request to other flows.
Some connectors like File and FTP do not lend themselves to be triggered by an outside event. The database will allow me to select on the fly but there is no analog for File and FTP.
It could be that I am just thinking about it in the wrong light, but I am a little stumped. I tried having the VM event trigger a script that starts a flow with an initial state of "stopped", and that flow pulls from an FTP site, but VM seems not to play well with starting and stopping flows, and it begins to feel like a 'cluttered' solution.
Thank you,
- Don
For this kind of scenario, you should use the Mule Requester Module.
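A rough sketch of what that can look like in a Mule 3 flow (the element and attribute names may differ slightly between module versions, and the FTP endpoint details are placeholders, so check them against the version you install):

<!-- Globally defined FTP endpoint the requester pulls from
     (host, credentials and path are placeholders). -->
<ftp:endpoint name="ftpResource" host="ftp.example.com" port="21"
              user="user" password="secret" path="/inbound"/>

<flow name="pullFileOnVmEvent">
    <!-- Triggered by the incoming VM message... -->
    <vm:inbound-endpoint path="trigger" exchange-pattern="one-way"/>
    <!-- ...then the requester module pulls the file on demand. -->
    <mulerequester:request resource="ftpResource"/>
    <!-- process the retrieved file payload here -->
</flow>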

Using mod_proxy_ajp, how do I set "special" AJP attributes?

I have set up an Apache Web Server 2.4 to act as a proxy for Apache Tomcat 7, communicating via the AJP protocol (mod_proxy_ajp on the Apache side and an AJP connector on the Tomcat side). Everything works great for basic functionality.
Now, I am looking to set some specific AJP attributes, but can't quite get it to work...
Looking at the mod_proxy_ajp page (http://httpd.apache.org/docs/current/mod/mod_proxy_ajp.html), under the Request Packet Structure section, I see a listing of attributes. These attributes include the likes of remote_user, and ssl_cert (code values 0x03 and 0x07, respectively). There is also an "everything else" attribute called req_attribute with code value 0x0A that can be used to set any arbitrary attribute in an AJP request.
Further, on the same page, under the Environment Variables section, it states the following:
Environment variables whose names have the prefix AJP_ are forwarded to the origin server as AJP request attributes (with the AJP_ prefix removed from the name of the key).
This seems straightforward enough, and indeed, I am easily able to set an arbitrary AJP attribute such as "doug-attribute" by setting an Apache environment variable called "AJP_doug-attribute" and assigning a relevant value. After doing so, I can analyze the traffic using Wireshark and see the "doug-attribute" field show up in the dissected AJP block, prefixed with a hex value of 0x0A (the "req_attribute" type listed above). So far so good.
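(For illustration, setting such a variable looks roughly like the following; the backend host, path and value are placeholders.)

# Any environment variable prefixed with AJP_ is forwarded to Tomcat
# as an AJP request attribute of type req_attribute (0x0A).
SetEnv AJP_doug-attribute "some-value"
ProxyPass        "/app" "ajp://tomcat-host:8009/app"
ProxyPassReverse "/app" "ajp://tomcat-host:8009/app"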
Now I want to try to set the ssl_cert attribute. In the same fashion, I set an environment variable called "AJP_ssl_cert". Doing so, it does show up in Wireshark, but with prefix code "0x0A". Further, my Java application, which wants to read the "javax.servlet.request.X509Certificate" attribute, does not find the certificate.
However, I also notice some other attributes in the Wireshark capture that are listed on the website, like ssl_cipher and ssl_key_size. But in the capture, they show up as "SSL-Cipher" and "SSL-Key-Size" (and have the appropriate "0x08" and "0x0B" prefix codes). So, I try setting the cert attribute again, this time as "SSL-Cert", but I get the same results as before.
To compare, I altered the Apache configuration to require client certificates, and then provided one in the browser when visiting the associated web page. At this point, I look at the Wireshark capture, and sure enough, there is now an attribute named "SSL-Cert", with code "0x07", and my web application in Tomcat is successfully able to find the certificate.
Is there any way that I can manually set the attributes listed on the mod_proxy_ajp page, or does the module handle them differently from other arbitrary request attributes (like "doug-attribute")? I feel like there must be something I am missing here.
As some background, the reason that I am trying to do this is that I have a configuration with multiple Apache web servers proxying each other, and then at the end, an Apache web server proxying to a Tomcat instance via AJP. All the Apache web servers use SSL and require client certificates. With just one Apache server, Tomcat can receive the user's certificate just fine without any special configuration on my part. However, with multiple, it ultimately receives the server certificate of the previous Apache proxy (set on that machine using the SSLProxyMachineCertificateFile directive).
What I am hoping to do is place the original user's certificate into the headers of the intermediate proxied requests, and then manually set that certificate in the AJP attributes at the back end so that the web application in Tomcat can read the certificate and use it to perform its authorization stuff (this part is already set in stone, otherwise I would just add the certificate as a header and make the Java app read the header).
EDIT: Of course, if there is an easier way to accomplish passing the user's certificate through the proxy chain, I'd be more than happy to hear it! :)