Can you debug Java code in a Bamboo Java Spec @BambooSpec main method?

I'm using Bamboo with Bamboo Java Specs, defining the pipeline as Java code in a Bitbucket-hosted project.
I'm trying to use a JSON file as a configuration file to specify which stages I want to run in my pipeline.
So I've created a configuration.json file, and I've added the following code to my @BambooSpec-annotated PlanSpec class.
private static Map<?, ?> getConfiguration(String configurationFile) throws Exception {
    System.out.println("Does this work at all?");
    Map<?, ?> map = null;
    try {
        // create Gson instance
        Gson gson = new Gson();
        // create a reader
        Reader reader = Files.newBufferedReader(Paths.get(configurationFile));
        // convert JSON file to map
        map = gson.fromJson(reader, Map.class);
        // print map entries
        for (Map.Entry<?, ?> entry : map.entrySet()) {
            System.out.println(entry.getKey() + "=" + entry.getValue());
        }
        // close reader
        reader.close();
    } catch (Exception ex) {
        ex.printStackTrace();
        throw ex;
    }
    return map;
}
But Bamboo only shows logs for what is run as part of the plan itself; the System.out.println output is not visible.
Is there a way to debug my code at runtime?
Edit: in the meantime I found out I can just run my code locally in the IDE. It then complains about a missing .credentials file, but that doesn't matter: I can at least test the code before it publishes the plan.
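To make that local run easier, you can add a small switch to the spec's main method so it only exercises the JSON parsing and skips publishing (and therefore never needs the .credentials file). This is only a sketch under the assumption that the class is called PlanSpec; the "local" argument and the createPlan() helper are made up for illustration.
public static void main(String[] args) throws Exception {
    // Hypothetical switch: pass "local" to only test the configuration parsing.
    boolean localDebugOnly = args.length > 0 && "local".equals(args[0]);

    // The println output from getConfiguration is visible in the IDE console.
    Map<?, ?> configuration = getConfiguration("configuration.json");

    if (localDebugOnly) {
        System.out.println("Parsed configuration: " + configuration);
        return; // skip publishing, so no .credentials file is required
    }

    // Normal publishing path, as generated by the Bamboo Specs archetype.
    BambooServer bambooServer = new BambooServer("https://your-bamboo-server.example");
    bambooServer.publish(new PlanSpec().createPlan()); // createPlan() is a hypothetical helper
}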

Related

Register Hibernate 5 Event Listeners

I am working on a legacy non-Spring application, and it is being migrated from Hibernate 3 to Hibernate 5.6.0.Final (latest at this time). I have generally never used Hibernate Event Listeners in my work, so this is quite new to me, and I am studying these in Hibernate 5.
Currently in some test class we have defined the code this way for Hibernate 3:
protected static Configuration createSecuredDatabaseConfig() {
    Configuration config = createUnrestrictedDatabaseConfig();
    config.setListener("pre-insert", "com.app.server.services.db.eventlisteners.MySecurityHibernateEventListener");
    config.setListener("pre-update", "com.app.server.services.db.eventlisteners.MySecurityHibernateEventListener");
    config.setListener("pre-delete", "com.app.server.services.db.eventlisteners.MySecurityHibernateEventListener");
    config.setListener("pre-load", "com.app.server.services.db.eventlisteners.EkoSecurityHibernateEventListener");
    return config;
}
This is obviously no longer valid, and I believe I need to create a Hibernate Integrator, which I have done.
public class MyEventListenerIntegrator implements Integrator {

    @Override
    public void integrate(Metadata metadata, SessionFactoryImplementor sessionFactory,
                          SessionFactoryServiceRegistry serviceRegistry) {
        EventListenerRegistry eventListenerRegistry = serviceRegistry.getService(EventListenerRegistry.class);
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_INSERT).appendListener(new MySecurityHibernateEventListener());
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_UPDATE).appendListener(new MySecurityHibernateEventListener());
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_DELETE).appendListener(new MySecurityHibernateEventListener());
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_LOAD).appendListener(new MySecurityHibernateEventListener());
    }

    @Override
    public void disintegrate(SessionFactoryImplementor sessionFactory, SessionFactoryServiceRegistry serviceRegistry) {
        // Nothing to clean up.
    }
}
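(For context, a bare-bones sketch of what one of the listener classes registered above might look like; the real class presumably also implements PreUpdateEventListener, PreDeleteEventListener and PreLoadEventListener since it is registered for all four event types, and the actual security logic is not shown in the question.)
import org.hibernate.event.spi.PreInsertEvent;
import org.hibernate.event.spi.PreInsertEventListener;

public class MySecurityHibernateEventListener implements PreInsertEventListener {

    @Override
    public boolean onPreInsert(PreInsertEvent event) {
        // Hypothetical: run the security check against event.getEntity() here.
        // Returning false means the insert is not vetoed and proceeds normally.
        return false;
    }
}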
So, now I believe the next step is to add this to the session via the registry builder. I am using this website to help me:
https://www.boraji.com/hibernate-5-event-listener-example
Because we were using older Hibernate 3, we had code to create our session factory as follows:
protected static SessionFactory buildSessionFactory(Database db) {
    if (db == null) {
        throw new NullPointerException("Database specifier cannot be null");
    }
    try {
        Configuration config = createSessionFactoryConfiguration(db);
        String url = config.getProperty("connection.url");
        String user = config.getProperty("connection.username");
        String password = config.getProperty("connection.password");
        try {
            String dbDriver = config.getProperty("hibernate.connection.driver_class");
            Class.forName(dbDriver);
            Connection conn = DriverManager.getConnection(url, user, password);
        } catch (SQLException error) {
            logger.info("Didn't find driver, on QA or production, so it's okay to assume we have DB connection");
            error.printStackTrace();
        }
        SessionFactory sessionFactory = config.buildSessionFactory();
        sessionFactoryConfigs.put(sessionFactory, config); // Cannot recover config from factory instance, must be stored.
        return sessionFactory;
    } catch (Throwable ex) {
        // Make sure you log the exception, as it might be swallowed
        logger.error("Initial SessionFactory creation failed.", ex);
        throw new ExceptionInInitializerError(ex);
    }
}
The link I referred to above takes a much different approach to creating the SessionFactory, so I'll be testing that out to see if it works in our app.
Without Spring handling our sessions and transactions, everything in this app is coded by hand the way it was done before Spring, and I haven't seen that kind of code in years.
I solved this issue with help from the link I provided above. I didn't copy exactly what they did, but some of it helped. My solution is as follows:
protected static SessionFactory createSecuredDatabaseConfig() {
    Configuration config = createUnrestrictedDatabaseConfig();
    BootstrapServiceRegistry bootstrapRegistry =
            new BootstrapServiceRegistryBuilder()
                    .applyIntegrator(new MyEventListenerIntegrator())
                    .build();
    ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder(bootstrapRegistry)
            .applySettings(config.getProperties())
            .build();
    SessionFactory sessionFactory = config.buildSessionFactory(serviceRegistry);
    return sessionFactory;
}
This was it. I tried multiple different ways to register the events without the BootstrapServiceRegistry, but none of those worked. I did have to create the integrator. What I did NOT include was the following:
MetadataSources sources = new MetadataSources(serviceRegistry)
        .addPackage("com.myproject.server.model");
Metadata metadata = sources.getMetadataBuilder().build();
// did not create the sessionFactory this way
sessionFactory = metadata.getSessionFactoryBuilder().build();
If I had gone further and used that method to create the SessionFactory, all of my queries would have complained about not being able to find the parameterName, which is a separate issue.
The Hibernate Integrator and this way of creating the SessionFactory are only used for the unit tests. Without registering these events, one unit test would fail, and now it doesn't, so this solves my problem for now.

Arquillian ShrinkWrap how to add an asset to the file system path

I am importing a library that reads from the file system instead of my web archive's resource folder. I want to essentially mock that file by adding an asset at that path using ShrinkWrap, so I can run tests on my build server without guaranteeing the file system has all these files. I tried to add a StringAsset at the appropriate path, but the code can't find that asset. Here's an example of what I'm trying to achieve.
Rest Resource
#Path("/hello-world")
public class HelloWorldResource {
#GET
public Response getHelloWorld(){
return Response.ok(getFileContent()).build();
}
private String getFileContent() {
StringBuilder builder = new StringBuilder();
try {
BufferedReader bufferedReader = new BufferedReader(
new FileReader(
"/usr/myFile.txt"));
String line = bufferedReader.readLine();
while (line != null) {
builder.append(line);
line = bufferedReader.readLine();
}
}
catch (Exception e) {
e.printStackTrace();
}
return builder.toString();
}
}
Test
@RunWith(Arquillian.class)
public class HelloWorldResourceTest {

    @Deployment
    public static WebArchive createDeployment() {
        WebArchive webArchive = ShrinkWrap
                .create(WebArchive.class)
                .addPackages(true,
                        HelloWorldApplication.class.getPackage(),
                        HelloWorldResource.class.getPackage(),
                        Hello.class.getPackage())
                .add(new StringAsset("Blah"), "/usr/myFile.txt")
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
        System.out.println("WebArchive: " + webArchive.toString(true));
        return webArchive;
    }

    @Test
    @RunAsClient
    public void testHello(
            @ArquillianResteasyResource("hello-world") final WebTarget webTarget) {
        final Response response = webTarget
                .request(MediaType.APPLICATION_JSON)
                .get();
        String hello = response.readEntity(String.class);
        System.err.println("Hello: " + hello);
        Assert.assertEquals("Status is not OK", 200, response.getStatus());
    }
}
Web Archive toString
/WEB-INF/
/WEB-INF/classes/
/WEB-INF/classes/com/
/WEB-INF/classes/com/helloworld/
/WEB-INF/classes/com/helloworld/application/
/WEB-INF/classes/com/helloworld/application/HelloWorldApplication.class
/WEB-INF/classes/com/helloworld/resource/
/WEB-INF/classes/com/helloworld/resource/HelloWorldResourceTest.class
/WEB-INF/classes/com/helloworld/resource/HelloWorldResource.class
/WEB-INF/classes/com/helloworld/dataobjects/
/WEB-INF/classes/com/helloworld/dataobjects/Hello.class
/WEB-INF/beans.xml
/usr/
/usr/myFile.txt
I get the following error:
java.io.FileNotFoundException: /usr/myFile.txt (No such file or directory)
Seems like ShrinkWrap is adding /usr/myFile.txt as a relative path within the archive instead of making it seem like /usr/myFile.txt is at the root directory of my file system. Is there any way I can get ShrinkWrap to do what I want?
ShrinkWrap is intended to create archives, so the API is scoped to creating assets within the archive you are building. If you want resources created on the regular filesystem, simply use the JDK; there is nothing ShrinkWrap can help you with there.
Alternatively, if possible, change your resource to read from the classpath rather than a filesystem path. With this approach you can easily swap in content for the test using ShrinkWrap, as you are trying to do in your example.
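A rough sketch of that classpath-based approach, assuming the resource is changed to read myFile.txt from the classpath (the resource name is illustrative, not from the original code):
// In HelloWorldResource: read from the classpath instead of an absolute filesystem path.
private String getFileContent() {
    StringBuilder builder = new StringBuilder();
    try (InputStream in = getClass().getResourceAsStream("/myFile.txt");
         BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
        String line;
        while ((line = reader.readLine()) != null) {
            builder.append(line);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return builder.toString();
}
In the deployment, the test content can then be added with .addAsResource(new StringAsset("Blah"), "myFile.txt"), which places it under WEB-INF/classes/ where the classpath lookup will find it.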

AmazonS3: Getting warning: S3AbortableInputStream:Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection

Here's the warning that I am getting:
S3AbortableInputStream:Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
I tried using try-with-resources, but the S3ObjectInputStream doesn't seem to be closed this way:
try (S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
     S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
     BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8))) {
    // some code here blah blah blah
}
I also tried the code below, explicitly closing the streams, but that doesn't work either:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
try (BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8))) {
    // some code here blah blah
    s3ObjectInputStream.close();
    s3object.close();
}
Any help would be appreciated.
PS: I am only reading two lines of the file from S3 and the file has more data.
I got the answer via another medium; sharing it here:
The warning indicates that you called close() without reading the whole file. This is problematic because S3 is still trying to send the data and you're leaving the connection in a sad state.
There are two options here:
1. Read the rest of the data from the input stream so the connection can be reused.
2. Call s3ObjectInputStream.abort() to close the connection without reading the data. The connection won't be reused, so you take some performance hit with the next request to re-create the connection. This may be worth it if it's going to take a long time to read the rest of the file.
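As a rough sketch of option 2, assuming you only need the first couple of lines of a large object (variable names follow the question):
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
try {
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8));
    String firstLine = reader.readLine();
    String secondLine = reader.readLine();
    // ... use the two lines ...
} finally {
    // Give up on the rest of the object; the underlying HTTP connection will not be reused.
    s3ObjectInputStream.abort();
}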
Following option #1 of Chirag Sejpal's answer, I used the statement below to drain the S3AbortableInputStream and ensure the connection can be reused:
com.amazonaws.util.IOUtils.drainInputStream(s3ObjectInputStream);
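For context, a sketch of where that drain call might sit, reusing the variable names from the question:
try (S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
     S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
     BufferedReader reader = new BufferedReader(
             new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8))) {
    // ... read only the lines you need ...

    // Drain whatever is left so the underlying HTTP connection can be reused.
    com.amazonaws.util.IOUtils.drainInputStream(s3ObjectInputStream);
}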
I ran into the same problem and the following class helped me
@Data
@AllArgsConstructor
public class S3ObjectClosable implements Closeable {

    private final S3Object s3Object;

    @Override
    public void close() throws IOException {
        s3Object.getObjectContent().abort();
        s3Object.close();
    }
}
and now you can use it without the warning:
try (final var s3ObjectClosable = new S3ObjectClosable(s3Client.getObject(bucket, key))) {
    // same code
}
To add an example to Chirag Sejpal's answer (elaborating on option #1), the following can be used to read the rest of the data from the input stream before closing it:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
try (S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent()) {
    try {
        // Read from stream as necessary
    } catch (Exception e) {
        // Handle exceptions as necessary
    } finally {
        while (s3ObjectInputStream != null && s3ObjectInputStream.read() != -1) {
            // Read the rest of the stream
        }
    }
    // The stream will be closed automatically by the try-with-resources statement
}
I ran into the same error.
As others have pointed out, the /tmp space in Lambda is limited to 512 MB, and if the Lambda execution context is reused for a new invocation, the /tmp space may already be half full. So, when reading the S3 objects and writing all the files to the /tmp directory (as I was doing), I ran out of disk space somewhere in between. The Lambda exited with an error, but NOT all bytes from the S3ObjectInputStream had been read.
So, there are two things to keep in mind:
1) If the first execution causes the problem, be stingy with your /tmp space; we only have 512 MB.
2) If the second execution causes the problem, it can be resolved by attacking the root problem. It's not possible to delete the /tmp folder itself, so delete all the files in the /tmp folder after the execution is finished.
In Java, here is what I did, which successfully resolved the problem.
public String handleRequest(Map<String, String> keyValuePairs, Context lambdaContext) {
    try {
        // All work here
        return "Success";
    } catch (Exception e) {
        logger.error("Error {}", e.toString());
        return "Error";
    } finally {
        deleteAllFilesInTmpDir();
    }
}
private void deleteAllFilesInTmpDir() {
    Path path = java.nio.file.Paths.get(File.separator, "tmp", File.separator);
    try {
        if (Files.exists(path)) {
            deleteDir(path.toFile());
            logger.info("Successfully cleaned up the tmp directory");
        }
    } catch (Exception ex) {
        logger.error("Unable to clean up the tmp directory");
    }
}
public void deleteDir(File dir) {
    File[] files = dir.listFiles();
    if (files != null) {
        for (final File file : files) {
            deleteDir(file);
        }
    }
    dir.delete();
}
This is my solution. I'm using Spring Boot 2.4.3.
Create an Amazon S3 client:
AmazonS3 amazonS3Client = AmazonS3ClientBuilder
        .standard()
        .withRegion("your-region")
        .withCredentials(
                new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("your-access-key", "your-secret-access-key")))
        .build();
Create an Amazon TransferManager client:
TransferManager transferManagerClient = TransferManagerBuilder.standard()
        .withS3Client(amazonS3Client)
        .build();
Create a temporary file at /tmp/{your-s3-key} into which we can download the object:
File file = new File(System.getProperty("java.io.tmpdir"), "your-s3-key");
try {
    file.getParentFile().mkdirs(); // Create the directory for the temporary file first
    file.createNewFile();          // Then create the temporary file itself
} catch (IOException e) {
    e.printStackTrace();
}
Then we download the file from S3 using the TransferManager client:
// Note that on this line the S3 object is downloaded into the temporary file we created
Download download = transferManagerClient.download(
        new GetObjectRequest("your-s3-bucket-name", "your-s3-key"), file);
// This line blocks the thread until the download is finished
download.waitForCompletion();
Now that the S3 object has been successfully transferred into the temporary file, we can get an InputStream for it:
InputStream input = new DataInputStream(new FileInputStream(file));
Because the temporary file is not needed anymore, we just delete it.
file.delete();

Can't upload files in spring boot

I've been struggling with this for the past three days now; I keep getting the following exception when I try to upload a file in my Spring Boot project.
org.springframework.web.multipart.support.MissingServletRequestPartException: Required request part 'file' is not present
I'm not sure if it makes a difference, but I am deploying my application as a WAR onto WebLogic.
Here is my controller:
@PostMapping
public AttachmentDto createAttachment(@RequestParam(value = "file") MultipartFile file) {
    logger.info("createAttachment - {}", file.getOriginalFilename());
    AttachmentDto attachmentDto = null;
    try {
        attachmentDto = attachmentService.createAttachment(new AttachmentDto(file, 1088708753L));
    } catch (IOException e) {
        e.printStackTrace();
    }
    return attachmentDto;
}
(Screenshot: multipart beans visible in Spring Boot Actuator)
(Screenshot: request payload as seen in Chrome)
The name attribute is required for @RequestParam "file":
<input type="file" class="file" name="file"/>
You can try using @RequestPart, because it uses an HttpMessageConverter that takes the 'Content-Type' header of the request part into consideration.
Note that @RequestParam annotation can also be used to associate the part of a "multipart/form-data" request with a method argument supporting the same method argument types. The main difference is that when the method argument is not a String, @RequestParam relies on type conversion via a registered Converter or PropertyEditor, while @RequestPart relies on HttpMessageConverters taking into consideration the 'Content-Type' header of the request part. @RequestParam is likely to be used with name-value form fields, while @RequestPart is likely to be used with parts containing more complex content (e.g. JSON, XML).
Spring Documentation
Code:
@PostMapping(consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
public AttachmentDto createAttachment(@RequestPart("file") MultipartFile file) {
    logger.info("Attachment - {}", file.getOriginalFilename());
    try {
        return attachmentService.createAttachment(new AttachmentDto(file, 1088708753L));
    } catch (final IOException e) {
        logger.error("Error creating attachment", e);
    }
    return null;
}
You are using multipart to send files, so there is not much configuration needed to get the desired result.
I have the same requirement and my code runs fine:
@RestController
@RequestMapping("/api/v2")
public class DocumentController {

    private static String bucketName = "pharmerz-chat";
    // private static String keyName = "Pharmerz" + UUID.randomUUID();

    @RequestMapping(value = "/upload", method = RequestMethod.POST, consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
    public String uploadFileHandler(@RequestParam("name") String name,
                                    @RequestParam("file") MultipartFile file) throws IOException {

        /******* Printing all the possible parameters from @RequestParam *************/
        System.out.println("*****************************");
        System.out.println("file.getOriginalFilename() " + file.getOriginalFilename());
        System.out.println("file.getContentType() " + file.getContentType());
        System.out.println("file.getInputStream() " + file.getInputStream());
        System.out.println("file.toString() " + file.toString());
        System.out.println("file.getSize() " + file.getSize());
        System.out.println("name " + name);
        System.out.println("file.getBytes() " + file.getBytes());
        System.out.println("file.hashCode() " + file.hashCode());
        System.out.println("file.getClass() " + file.getClass());
        System.out.println("file.isEmpty() " + file.isEmpty());

        /**
         * BUSINESS LOGIC
         * Write code to upload the file where you want
         */
        return "File uploaded";
    }
}
None of the above solutions worked for me, but when I dug deeper I found that Spring Security was the main culprit. Even though I was sending the CSRF token, I repeatedly got "POST not supported". When I inspected the request with the developer tools in Google Chrome, I saw a 403 Forbidden status code in the network tab. I added the mapping to the ignored CSRF mappings in my Spring Security configuration and then it worked without any other issue. I don't know why security would not let me post multipart data. Some mandatory settings that need to be stated in the application.properties file are as follows:
spring.servlet.multipart.max-file-size=10MB
spring.servlet.multipart.max-request-size=10MB
spring.http.multipart.max-file-size=10MB
spring.http.multipart.max-request-size=10MB
spring.http.multipart.enabled=true
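As a rough sketch of the Spring Security side (the WebSecurityConfigurerAdapter style and the endpoint path are assumptions, not taken from the original answer):
@Override
protected void configure(HttpSecurity http) throws Exception {
    http
        // Illustrative only: exclude the multipart upload endpoint from CSRF protection.
        .csrf().ignoringAntMatchers("/api/v2/upload")
        .and()
        .authorizeRequests().anyRequest().authenticated();
}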

Glassfish Custom Properties Resource Not Loading From File

I've deployed a webapp (war) to Glassfish v3 and I am trying to get it to read from properties defined in a custom resource.
In my app, I've defined the properties as:
@Resource(mappedName = "TestServletProperties")
private Properties properties;
and make use of it like this:
protected void doGet(final HttpServletRequest request,
                     final HttpServletResponse response) throws ServletException, java.io.IOException
{
    String propertyOne = properties.getProperty("testServlet.propertyOne");
    String propertyTwo = properties.getProperty("propertyTwo");

    StringBuffer buffer = new StringBuffer("Properties Retrieved\n");
    buffer.append("Property One: " + propertyOne + "\n");
    buffer.append("Property Two: " + propertyTwo + "\n");

    try
    {
        response.getWriter().write(buffer.toString());
    }
    catch (Exception ex)
    {
        try
        {
            log.warn("Exception thrown", ex);
            response.getWriter().write(ex.getStackTrace().toString());
        }
        catch (IOException io)
        {
            log.warn("IOException thrown", io);
        }
    }
}
In GlassFish, I've created a JNDI custom resource called TestServletProperties of type java.util.Properties, using the factory class org.glassfish.resources.custom.factory.PropertiesFactory. In the resource there is one property, "fileName", with its value set to the absolute path of the properties file:
/Program Files/glassfishv3/glassfish/domains/domain1/applications/Test/WEB-INF/classes/TestServlet_lab.properties
I've also tried
c:\Program Files\glassfishv3\glassfish\domains\domain1\applications\Test\WEB-INF\classes\TestServlet_lab.properties
I have confirmed that the file exists and contains the referenced properties. Unfortunately, I'm getting back "null" for both values in my response.
Any thoughts?
The solution is that you have to use the fully qualified property name "org.glassfish.resources.custom.factory.PropertiesFactory.fileName" rather than just "fileName".
The reason might be that you have a web.xml file with the header of a 2.4 (or older) servlet version.
@Resource and other annotations are only processed if you have at least version 2.5 in the header of web.xml. Be sure that you do not simply change the version number, but copy and paste the new header from somewhere, as the namespace is different.
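For reference, a Servlet 2.5 web.xml header looks roughly like this (double-check the schema location against your target server's documentation):
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         version="2.5">
    <!-- servlet declarations etc. -->
</web-app>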
Hope this helps