How can I monitor the authentication logs on the Google cloud platform?
For example, to check whether someone has attempted unauthorized access.
With Cloud Audit Logs you can answer the question of "who did what, where, and when?" within your Google Cloud resources. It provides the following audit logs for each Cloud project, folder, and organization:
Admin Activity audit logs
Data Access audit logs
System Event audit logs
Policy Denied audit logs
You can find more information in the Cloud Audit Logs documentation. These logs let you see every event that happens in your projects, although they may contain far more than the specific information you are looking for.
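For illustration, these logs can also be queried programmatically. Here is a minimal sketch using the google-cloud-logging Python client; the project ID and timestamp in the filter are placeholders, not values from your environment:

```python
# Minimal sketch, assuming the google-cloud-logging client library is
# installed and you are authenticated. "my-project" and the timestamp
# are placeholders.
from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")

# Admin Activity audit logs are written to the
# cloudaudit.googleapis.com%2Factivity log of each project.
log_filter = (
    'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity" '
    'AND timestamp>="2024-01-01T00:00:00Z"'
)

for entry in client.list_entries(filter_=log_filter):
    payload = entry.payload  # the AuditLog record, rendered as a dict
    auth_info = payload.get("authenticationInfo", {})
    print(entry.timestamp, auth_info.get("principalEmail"), payload.get("methodName"))
```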
There is also a tool, Event Threat Detection, that uses log data from inside your systems; when a threat is detected, Event Threat Detection writes a finding to Security Command Center and to a Cloud Logging project.
For example:
Event Threat Detection detects SSH brute-force password attacks by examining syslog for repeated failures followed by a success.
Note that this feature is available only in the Security Command Center Premium tier.
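If you do use the Premium tier, those findings can be read programmatically from Security Command Center. Below is a hedged sketch with the google-cloud-securitycenter client; the organization ID is a placeholder, and the finding category string is my assumption, so verify the exact value against the findings in your console:

```python
# Hedged sketch: list SSH brute-force findings from Security Command Center.
# Requires the Premium tier; the organization ID below is a placeholder.
from google.cloud import securitycenter

client = securitycenter.SecurityCenterClient()
# "sources/-" matches findings across all sources in the organization.
all_sources = "organizations/123456789/sources/-"

# The category value is an assumption; check your findings for the exact string.
finding_filter = 'category="Brute Force: SSH"'

for result in client.list_findings(
    request={"parent": all_sources, "filter": finding_filter}
):
    finding = result.finding
    print(finding.event_time, finding.category, finding.resource_name)
```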
You also mentioned that you have some VM instances and want to prevent attacks against them.
I recommend checking the following documentation: Securely connecting to VM instances.
This document explains several methods for protecting services on VMs with external IP addresses, including firewalls, HTTPS and SSL, port forwarding over SSH, and a SOCKS proxy over SSH.
For example, by creating firewall rules, you can restrict all traffic to a network, or to target machines on a given set of ports, to specific source IP addresses.
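As a sketch of that idea, a rule allowing SSH only from a trusted range could be created with the google-cloud-compute Python client; the project ID, rule name, and CIDR range below are placeholders:

```python
# Hedged sketch: allow tcp:22 only from a trusted source range on the
# default network. "my-project" and 203.0.113.0/24 are placeholders.
from google.cloud import compute_v1

firewall = compute_v1.Firewall(
    name="allow-ssh-from-trusted-range",
    network="global/networks/default",
    direction="INGRESS",
    source_ranges=["203.0.113.0/24"],
    allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["22"])],
)

client = compute_v1.FirewallsClient()
operation = client.insert(project="my-project", firewall_resource=firewall)
operation.result()  # block until the insert operation completes
```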
I want to ask a conceptual question and get advice on a possible system design.
The plan is basically to authenticate specific Gmail users so they can use my serverless backend application. I'm considering either forwarding users directly to my VPC, or authenticating them on my hosting provider's server and then forwarding them to the VPC (or directly to the Cloud Run service?).
I'd be really glad if someone experienced could guide me through the concepts and suggest design ideas for this.
As commented by @John Hanley, your question mixes concepts that do not exist.
To authenticate specific users to your serverless backend application on Cloud Run, work through the following possible system designs:
1) Design the IAM roles that are associated with Cloud Run, and list the permissions contained in each role.
2) Design how to secure Cloud Run and configure it to limit access to the service with Identity-Aware Proxy (IAP).
3) Design how to create a Serverless VPC Access connector, and how to use IAP for TCP forwarding within a VPC Service Controls perimeter.
4) Implement, step by step, how to use IAP to secure portal access without a Virtual Private Network (VPN). IAP simplifies implementing a zero-trust access model and takes less time to set up than a VPN, giving remote workers both on-premises and in cloud environments a single point of control for managing access to your apps.
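For completeness, once IAP fronts the service, the backend should verify the signed header that IAP adds to each request. Below is a sketch following Google's documented verification flow with the google-auth library; the audience string is a placeholder that depends on your project and backend:

```python
# Hedged sketch: validate the JWT that IAP attaches to proxied requests.
# The expected audience depends on your project/backend configuration.
from google.auth.transport import requests
from google.oauth2 import id_token

def verify_iap_jwt(iap_jwt: str, expected_audience: str) -> str:
    """Return the authenticated user's email if the IAP JWT is valid."""
    decoded = id_token.verify_token(
        iap_jwt,
        requests.Request(),
        audience=expected_audience,
        certs_url="https://www.gstatic.com/iap/verify/public_key",
    )
    return decoded["email"]

# In a request handler, the token arrives in a fixed header:
# email = verify_iap_jwt(request.headers["x-goog-iap-jwt-assertion"], audience)
```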
The solution to what I had in mind could be accomplished with Identity-Aware Proxy.
A WebLogic server was hacked, and the problem has now been removed.
I am now looking through the infected VMs in a sandbox and want to see what data, if any, was accessed on the application servers.
The app servers were getting hammered with SSH requests, which is how we identified the infected VMs as the WebLogic VMs; we did not have HTTP logging turned on. Is there any way to identify whether any PII was compromised?
I have looked through the secure logs on WebLogic as well as the PIA logs.
I am not sure how to identify what data, if any, was accessed.
I would like to find out what information or data went out of our network.
What should I be looking for?
Is there anything I can learn from looking at the WebLogic servers running on Red Hat?
I would suspect that SSH was not the only service being hammered, and that this was a large-scale attempt to draw eyes to the auth logging while an attempt was made on other services.
Do you have a time frame that you are working with?
Have the OS logs been checked for that time frame?
Has .bash_history been checked? Environment variables? /etc/pass* for added users? Aliases? Reverse shells open among the network connections? New users created on services running on that particular host?
Was WebLogic the only service running on this publicly available host?
What other services and ports were available?
Was this due to an older version of WebLogic, or to another service, application, or plugin?
Create an Excel spreadsheet and start a timeline.
Look at all the OS-level logging available and make note of anything that looks suspicious, then follow each breadcrumb to exhaustion.
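As a starting point for that timeline, a small script can pull SSH authentication events out of /var/log/secure on a Red Hat host. This is only a sketch; the log path and message formats are the usual defaults, so verify them on your systems:

```python
# Hedged sketch: extract SSH auth events from /var/log/secure for a timeline.
# A run of FAILED lines from one IP followed by an ACCEPTED line is the
# classic brute-force-success signature to look for.
import re

PATTERNS = {
    "FAILED": re.compile(r"Failed password for (?:invalid user )?(?P<user>\S+) from (?P<ip>\S+)"),
    "ACCEPTED": re.compile(r"Accepted \S+ for (?P<user>\S+) from (?P<ip>\S+)"),
}

with open("/var/log/secure", errors="replace") as log:
    for line in log:
        for kind, pattern in PATTERNS.items():
            match = pattern.search(line)
            if match:
                # In the default syslog format the timestamp is the first 15 chars.
                print(line[:15], kind, match.group("user"), match.group("ip"))
```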
What is the difference between Agent User ID (Settings tab) and User/Password (Transport tab)? Please share scenarios for both when configuring replication agents in AEM.
This is well covered in Adobe's documentation here.
The missing context is an understanding of how ACLs work: each user/group has certain privileges/rights, which beyond the normal CRUD operations include Read ACL, Edit ACL, and Replicate. You can read about them here.
Now, coming to your question: a replication agent has a host configuration (the system on which it is set up) and a target configuration (the system it connects to). Agent User ID is used for the host system, while User/Password on the Transport tab is for the target system.
For a replication agent on author, the user set in Agent User ID must have read and replicate rights on all paths that need to be processed, whereas the user specified in User/Password on the Transport tab must have create/write access to replicate the content on the Publish instance.
When using a Bamboo cloud agent, on Windows, you're instructed to have a Bamboo Windows user with a default known password: Atlassian1.
It clearly says that this user should be configured to deny remote login.
But still, it's an active Windows user with a fair bit of permissions. Bamboo's (cloud) server interacts with the machine on a known port, 26224. Through this channel it sends all build commands, gets build status from the remote agent, and so on.
What prevents a hacker from scanning the Internet, find a host with port 26224 open and start talking with the Bamboo agent? How does the agent know for sure that it talks to a legitimate Bamboo CI server?
I'm asking that in order to be fully confident that there is no possible attack vector.
The Security documentation for Bamboo states:
Please note the following security implications when enabling remote agents for Bamboo:
No encryption of data passed between server and agent — this includes data such as:
login credentials for version control repositories
build logs
build artifacts
No authentication of the agent or server — this could result in unauthorised actions being taken on your system, such as:
Unauthorised parties installing new remote agents — version control repository login credentials could be stolen.
Unauthorised parties masquerading as a Bamboo server — the unauthorised server could pass malicious code to the agent to run.
See Agent authentication for more information.
We strongly recommend that you do not enable remote agent installation on any Bamboo instance accessible from a public or untrusted network. Creating remote agents is disabled by default (see Disabling and enabling remote agents support).
For public-facing agents, Atlassian strongly recommends securing them using SSL. See Securing your remote agents, which contains this note:
This page applies to remote agents and not elastic agents. Elastic agents are secured automatically by the Bamboo server and no additional steps are required.
Furthermore, on the elastic piece, the documentation on Elastic Bamboo Security states:
All traffic sent between the agents located in EC2 and the Bamboo server is tunnelled through an SSL-encrypted tunnel. The tunnel will be initiated from the Bamboo Server to the EC2 instance, which means that you don't need to allow any inbound connections to your server. You will need to permit outbound traffic from the server on the tunnel port, however - the default port number is 26224. On the EC2 instance, only the tunnel port needs to be open for inbound traffic.
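As a generic illustration of why the SSL tunnel answers the original concern (this is the standard TLS pattern, not Bamboo's actual code): the client refuses to exchange any data unless the peer presents a certificate that validates against a trusted CA, so a random host answering on port 26224 cannot masquerade as a legitimate endpoint. The host name and CA bundle below are placeholders:

```python
# Generic TLS sketch, not Bamboo's implementation: the connection fails
# unless the peer's certificate chains to a CA we trust.
import socket
import ssl

context = ssl.create_default_context(cafile="trusted-ca.pem")  # placeholder CA bundle

with socket.create_connection(("agent-host.example.com", 26224)) as sock:
    # check_hostname is on by default; a mismatched or untrusted certificate
    # raises ssl.SSLCertVerificationError before any data is exchanged.
    with context.wrap_socket(sock, server_hostname="agent-host.example.com") as tls:
        print("negotiated", tls.version(), "peer:", tls.getpeercert()["subject"])
```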
Our product (a network forensics and analytics tool) has a requirement to dissect RDP sessions on Windows 200x servers and to:
Map each session to a logged-in account.
Track all TCP/UDP sessions that are going to the internet.
URLs visited
External servers and ports connected to, etc.
I have designed code that can achieve this by installing an NT service on each of the terminal servers. This service mines the data on that server and pushes it to my Linux-based appliance. Alternatively, it can log the information to the local event log, and then I can use simple WMI calls to retrieve it.
However, I would like to know if there is a way to retrieve all TCP/UDP connections by polling the terminal servers externally (via WMI or otherwise) and gather the same information. Basically, I am trying to check whether there is a way to avoid installing anything on the Windows terminal servers.
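To illustrate the kind of external polling I have in mind, here is a rough sketch using the Python wmi package; the host name, credentials, and the RemoteInteractive logon-type filter are placeholders/assumptions on my part. (As far as I can tell, the stock WMI classes on these older Windows versions do not expose per-connection TCP/UDP tables, which is part of why I am asking.)

```python
# Hedged sketch: remotely map RDP logon sessions to accounts over WMI,
# without installing anything on the terminal server. Host and credentials
# are placeholders; run from a Windows box with pywin32 and the wmi package.
import wmi

conn = wmi.WMI(computer="terminal-server-01",
               user=r"DOMAIN\auditor", password="...")

# Win32_LoggedOnUser associates Win32_LogonSession with Win32_Account.
sessions = {s.LogonId: s for s in conn.Win32_LogonSession()}

for assoc in conn.Win32_LoggedOnUser():
    session = sessions.get(assoc.Dependent.LogonId)
    account = assoc.Antecedent
    if session and int(session.LogonType) == 10:  # 10 = RemoteInteractive (RDP)
        print(f"{account.Domain}\\{account.Name}",
              "session", session.LogonId, "started", session.StartTime)
```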
Thanks,
-Chandra