When using a Bamboo cloud agent on Windows, you're instructed to create a Bamboo Windows user with a default, publicly known password: Atlassian1.
The documentation clearly says that this user should be configured to deny remote login.
But still, it's an active Windows user with a fair bit of permissions. Bamboo's cloud server interacts with the machine on a known port, 26224. Through this channel it sends all build commands, gets build status from the remote agent, and so on.
What prevents a hacker from scanning the Internet, finding a host with port 26224 open, and starting to talk to the Bamboo agent? How does the agent know for sure that it is talking to a legitimate Bamboo CI server?
I'm asking in order to be fully confident that there is no possible attack vector.
The Security documentation for Bamboo states:
Please note the following security implications when enabling remote agents for Bamboo:
No encryption of data passed between server and agent — this includes data such as:
login credentials for version control repositories
build logs
build artifacts
No authentication of the agent or server — this could result in unauthorised actions being taken on your system, such as:
Unauthorised parties installing new remote agents — version control repository login credentials could be stolen.
Unauthorised parties masquerading as a Bamboo server — the unauthorised server could pass malicious code to the agent to run.
See Agent authentication for more information.
We strongly recommend that you do not enable remote agent installation on any Bamboo instance accessible from a public or untrusted network. Creating remote agents is disabled by default (see Disabling and enabling remote agents support).
For public-facing agents, Atlassian strongly recommends securing them, which is done using SSL. See Securing your remote agents, which contains this note:
This page applies to remote agents and not elastic agents. Elastic agents are secured automatically by the Bamboo server and no additional steps are required.
Furthermore, on the elastic piece, their documentation on Elastic Bamboo Security states:
All traffic sent between the agents located in EC2 and the Bamboo server is tunnelled through an SSL-encrypted tunnel. The tunnel will be initiated from the Bamboo Server to the EC2 instance, which means that you don't need to allow any inbound connections to your server. You will need to permit outbound traffic from the server on the tunnel port, however - the default port number is 26224. On the EC2 instance, only the tunnel port needs to be open for inbound traffic.
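In practice this means the only change needed on the server side is permitting outbound traffic on the tunnel port, and on the agent side inbound traffic on that port only. A rough sketch with iptables (assuming Linux hosts; on EC2 the inbound side is normally handled by a security group rather than a host firewall):

    # On the Bamboo server: allow outbound connections to the tunnel port (default 26224)
    iptables -A OUTPUT -p tcp --dport 26224 -j ACCEPT

    # On the elastic agent instance: allow inbound connections on the tunnel port only
    iptables -A INPUT -p tcp --dport 26224 -j ACCEPT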
How can I monitor the authentication logs on the Google Cloud Platform?
For example, to check whether someone has tried to gain unauthorized access.
With Cloud Audit Logs you will be able to answer the questions of "who did what, where, and when?" within your Google Cloud resources. The service provides the following audit logs for each Cloud project, folder, and organization:
Admin Activity audit logs
Data Access audit logs
System Event audit logs
Policy Denied audit logs
You can obtain more information in the Cloud Audit Logs documentation. These logs are useful for seeing all the events that happen in your projects, but they might not surface exactly the information you are looking for.
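For example, you can quickly inspect recent Admin Activity audit log entries from the command line with the gcloud CLI (a rough sketch; my-project is a placeholder for your project ID):

    # List the 10 most recent Admin Activity audit log entries for the project
    gcloud logging read \
      'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity"' \
      --project=my-project \
      --limit=10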
Nevertheless, there is a tool, Event Threat Detection, that uses log data from inside your systems; when a threat is detected, Event Threat Detection writes a finding to Security Command Center and to a Cloud Logging project.
For example:
Event Threat Detection detects brute force of password authentication SSH by examining syslog logs for repeated failures followed by a success.
However, this feature is only available in the Security Command Center Premium tier.
On the other hand, you mentioned that you have some VM instances and want to prevent attacks on them.
I recommend checking the following documentation: Securely connecting to VM instances
This document explains several methods for protecting services on VMs with external IP addresses, including firewalls, HTTPS and SSL, port forwarding over SSH, and a SOCKS proxy over SSH.
For example, by creating firewall rules, you can restrict all traffic to a network or target machines on a given set of ports to specific source IP addresses.
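As a sketch, a rule like the following (the rule name, tag, and address range are placeholders, and it assumes no broader allow rule such as the default network's default-allow-ssh is present) permits SSH only from one trusted range to instances tagged bastion:

    # Allow SSH only from a trusted source range to instances tagged "bastion"
    gcloud compute firewall-rules create allow-ssh-from-office \
      --network=default \
      --direction=INGRESS \
      --action=ALLOW \
      --rules=tcp:22 \
      --source-ranges=203.0.113.0/24 \
      --target-tags=bastion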
I would like to host a password-protected static website on a server and meet the following two requirements:
The static website credentials MUST NOT give any additional access to the hosting server.
The hosting must play nicely with other IIS hosted websites
The hosting server is running Windows 10 Pro.
I've identified 4 options:
Host it in IIS with Basic Authentication enabled
Host it in Apache, separate port, secure with .htpasswd file
Host it in Apache in a VM, use a bridged network, secure with .htpasswd file
Develop a middleware/route request authentication application
Option 1:
Evidently, this option requires a whole new User on the computer.
I do not understand the limitations of a new user's access.
When I hit Windows key + R and run netplwiz, I can configure the user to belong to one of these groups:
Users(default): Users are prevented from making accidental or intentional system-wide changes and can run most applications.
Guest: Guests have the same access as members of the Users group by default, except for the Guest account, which is further restricted as described earlier.
IIS_IUSR: Built-in group used by Internet Information Services.
I cannot find the following information in any Microsoft docs:
How IIS_IUSR is "used" by IIS
If any of these groups restrict all access, other than viewing the Basic Auth website
An exhaustive list of permissions granted by the user login credentials, and each group
This method seems confusing and annoying at best, and a complete security failure at worst.
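For reference, what Option 1 involves on the IIS side is roughly the following web.config sketch (assuming the Basic Authentication feature is installed and the authentication sections are unlocked at server level); the user accounts and their group membership are still managed by Windows, which is exactly the part I find unclear:

    <!-- web.config for the static site: require Basic Authentication, disable anonymous access -->
    <configuration>
      <system.webServer>
        <security>
          <authentication>
            <anonymousAuthentication enabled="false" />
            <basicAuthentication enabled="true" />
          </authentication>
        </security>
      </system.webServer>
    </configuration>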
Option 2:
This seems more secure to me, because I can understand the limitations of the user access better.
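To make Option 2 concrete, the Apache side would look roughly like the sketch below (the paths, port, and user name are placeholders; htpasswd is the tool that ships with Apache):

    # Create the password file (run once; -c creates the file)
    htpasswd -c C:/apache/conf/.htpasswd siteuser

    # httpd.conf excerpt: serve the static site on a separate port and require the .htpasswd login
    Listen 8081
    <VirtualHost *:8081>
        DocumentRoot "C:/sites/static"
        <Directory "C:/sites/static">
            AuthType Basic
            AuthName "Restricted"
            AuthUserFile "C:/apache/conf/.htpasswd"
            Require valid-user
        </Directory>
    </VirtualHost>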
Option 3:
This seems even more secure, because the hosting server is not directly accessed.
I do not know if this creates other security vulnerabilities though.
Option 4:
This one seems the most secure, because I have full understanding and control over the website's access.
This could take a lot of work though.
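For Option 4, a minimal sketch of what I have in mind, assuming Python and Flask purely for illustration (the credentials, folder name, and port are placeholders); the login check is independent of any Windows account, which addresses requirement 1:

    # Minimal password-protected static file server sketch (Flask assumed for illustration)
    import hmac
    from flask import Flask, request, Response, send_from_directory

    app = Flask(__name__)
    SITE_ROOT = "static_site"                       # folder containing the static website
    USERNAME, PASSWORD = "siteuser", "change-me"    # placeholder credentials

    def authorized(auth):
        # Constant-time comparison; these credentials grant nothing beyond this app
        return (auth is not None
                and hmac.compare_digest(auth.username or "", USERNAME)
                and hmac.compare_digest(auth.password or "", PASSWORD))

    @app.route("/")
    @app.route("/<path:filename>")
    def serve(filename="index.html"):
        if not authorized(request.authorization):
            return Response("Authentication required", 401,
                            {"WWW-Authenticate": 'Basic realm="Static site"'})
        return send_from_directory(SITE_ROOT, filename)

    if __name__ == "__main__":
        app.run(port=8082)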
An organization can adopt the following policy to protect itself against web server attacks.
Patch management – this involves installing patches to help secure the server. A patch is an update that fixes a bug in the software. Patches can be applied to the operating system and to the web server software.
Secure installation and configuration of the operating system
Secure installation and configuration of the web server software
Vulnerability scanning – this includes tools such as Nmap for port and service scanning and Snort for intrusion detection.
Firewalls can be used to stop simple DoS attacks by blocking all traffic coming from the identified source IP addresses of the attacker (see the sketch after this list).
Antivirus software can be used to remove malicious software from the server.
Disabling Remote Administration
Default accounts and unused accounts must be removed from the system
Default ports and settings (like FTP on port 21) should be changed to custom ports and settings (for example, FTP on port 5069).
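As a small illustration of the firewall and default-port points above (the source address and port are placeholders; adjust for your own firewall and FTP server):

    # Drop all traffic from an identified attacking source address
    iptables -A INPUT -s 198.51.100.23 -j DROP

    # vsftpd.conf excerpt: move FTP off the default port 21 to a custom port
    listen_port=5069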
In WebLogic, I have more than one Machine created using Node Manager. We have been told to set up SSL for our application, which is deployed across these machines under a single WebLogic Admin Console.
So for the application we configured a certificate using a .jks file and enabled the SSL listen port.
However, we have also been told to secure the Node Manager machines across which the application is deployed. When I change the Node Manager type to SSL instead of Plain, I get an SSLException. My understanding is that we do not need to secure the Machines created using Node Manager and that securing the application is sufficient. Am I right, or is it required to secure Machines -> Node Manager as well?
When I turn on SSL in Machines -> Node Manager, what do I have to consider to avoid the SSLException? Is a WebLogic restart required if I configure this? For now I do not have UNIX access, so I cannot do that at the moment.
Please advise on this situation. Without securing Machines -> Node Manager I am able to run the application, but I am not able to access it using HTTPS; only HTTP for the application works.
SSL for Node Manager is optional, as no application-related sensitive data flows in this layer.
You mention that even after configuring the JKS keystore you cannot get the server, and hence the application, listening on HTTPS. Could you elaborate on the steps you followed? Note that this has nothing to do with Node Manager.
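If you do decide (or are required) to switch Node Manager itself to SSL, the usual cause of the SSLException is that Node Manager has no usable identity keystore configured. A rough sketch of the relevant nodemanager.properties entries is below (the property names are the standard Node Manager SSL properties, but the paths, alias, and passwords are placeholders; Node Manager must be restarted after changing this file, and you should verify the details against the documentation for your WebLogic version):

    # nodemanager.properties excerpt: listen over SSL using a custom identity keystore
    SecureListener=true
    KeyStores=CustomIdentityAndCustomTrust
    CustomIdentityKeyStoreFileName=/path/to/identity.jks
    CustomIdentityKeyStoreType=JKS
    CustomIdentityKeyStorePassPhrase=keystore-password
    CustomIdentityAlias=server_cert
    CustomIdentityPrivateKeyPassPhrase=key-password
    CustomTrustKeyStoreFileName=/path/to/trust.jks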
I've followed the instructions here: http://guac-dev.org/doc/gug/installing-guacamole.html
This says
Guacamole is separated into two pieces: guacamole-server, which provides the guacd proxy and related libraries, and guacamole-client, which provides the client to be served by your servlet container, usually Tomcat.
guacamole-client is available in binary form, but guacamole-server must be built from source. Don't be discouraged: building the components of Guacamole from source is not as difficult as it sounds, and the build process is automated. You just need to be sure you have the necessary tools installed ahead of time. With the necessary dependencies in place, building Guacamole only takes a few minutes.
It then proceeds to describe how to install guacamole-server and use it. I can now go to http://localhost:8080/guacamole/ and access the server and see which clients have connected.
How do I connect a client though? I see no documentation of where the remote desktop needs to browse to in order to run the guacamole-client?
Or have I totally misunderstood this?
The key phrase in the quoted documentation is:
... guacamole-client, which provides the client to be served by your servlet container, usually Tomcat.
"guacamole-client" is the web application and the client. When a user visits the URL for your Guacamole server, logs in, and clicks on a connection, they are connected to the corresponding remote desktop via Guacamole's JavaScript client which is served to their browser like any other web application.
I can now go to http://localhost:8080/guacamole/ and access the server and see which clients have connected.
The list you see when you first log in to your Guacamole server is not the list of clients that have connected; it is the list of connections to remote desktops which are available. If you click on one of those connections, you will be connected using Guacamole's own built-in JavaScript client.
How do I connect a client though? I see no documentation of where the remote desktop needs to browse to in order to run the guacamole-client?
The remote desktop does not need to do anything - Guacamole will simply connect to it. You can see a video of the overall user experience on the Guacamole website which may hopefully clear things up for you:
https://vimeo.com/116207678
Overall:
You deploy guacamole-client (the web application) and install guacamole-server (the remote desktop proxy that the web application uses in the backend). The combination of these two pieces of software makes up a typical Guacamole server.
You and your users can log in through the web application and connect to remote desktops using a web browser.
You do not need to explicitly run a client.
It looks like this
Internet -> Guacamole server (on the local network) -> Desktop pc
I installed Guacamole in a VMware environment on Ubuntu.
There is a file in /etc/guacamole called user-mapping.xml.
In that file you add or edit the connections available to the user you want.
A connection for that user must be set between the <connection> tags
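For reference, a minimal user-mapping.xml sketch along those lines (the username, password, and target host are placeholders; RDP is just one example, VNC and SSH connections work the same way):

    <user-mapping>
        <!-- One web login; the connections listed inside are what this user sees after logging in -->
        <authorize username="alice" password="secret">
            <connection name="Office desktop">
                <protocol>rdp</protocol>
                <param name="hostname">192.168.1.50</param>
                <param name="port">3389</param>
            </connection>
        </authorize>
    </user-mapping>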
I need a way to use the GlassFish 3.1.2.2 admin service (REST calls to deploy and configure) from a remote machine and from the local machine (command line and applications).
It is clear that remote access requires enabling secure admin. But if we enable secure admin, it will break all local access from applications. These applications cannot be changed to use HTTPS to access the admin service. The only thing I can change is the port we use.
I see two possible ways for me:
Use a workaround so that I can administer with secure admin disabled and keep using plain HTTP. For us this is an acceptable solution, because the machine is used internally in a test environment.
Configure GlassFish so that we can use the admin service remotely via secure HTTPS and locally via HTTP.
We prefer solution 1, because it fits our environment better and requires less effort. At the moment I see no way to do it; does a solution exist (it does not need to be suitable for production)?
I tried something for solution 2, similar to http-listener-1 and http-listener-2: use two ports, 4848 for local unsecured access and, for example, 4949 for remote secure access. But I always fail at the configuration, so I started with a step-by-step approach: first enable the admin interface on two ports, then as a second step add secure access to the new admin-listener port. However, I only got one of the ports working. Can anyone help me with the target configuration? Any domain.xml example would be welcome.
Thanks florian
You can try using SSH to reach the GlassFish machine and running the asadmin utility there instead of calling the admin service remotely.
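For example, keeping secure admin disabled, you can SSH into the GlassFish host and run admin commands there against localhost over plain HTTP. A rough sketch (the host name and application path are placeholders):

    # From the remote machine: open an SSH session to the GlassFish host
    ssh admin@glassfish-host

    # On the GlassFish host: plain-HTTP admin calls against localhost still work
    asadmin --host localhost --port 4848 list-applications
    asadmin --host localhost --port 4848 deploy /tmp/myapp.war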