Java LocalStack Lambda - how to run a Lambda function and see its logs - aws-java-sdk

I am trying to run a Lambda function using LocalStack and to see its logs ...
The handler class looks like:
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class LambdaLoader implements RequestHandler<Object, String> {
    @Override
    public String handleRequest(Object input, Context context) {
        LambdaLogger logger = context.getLogger();
        logger.log("\"started\"");
        return "Complete";
    }
}
I am running it from an integration test:
public class LambdaLoaderIT {

    @Test
    void handleRequest() throws InterruptedException, IOException {
        AwsClientBuilder.EndpointConfiguration endpointConfiguration =
            new AwsClientBuilder.EndpointConfiguration(
                "http://localhost:4566", Regions.US_EAST_1.getName());
        AWSLambda lambdaClient = createLambdaClient(endpointConfiguration);
        createLambda(lambdaClient);
    }
    private AWSLambda createLambdaClient(
            AwsClientBuilder.EndpointConfiguration endpointConfiguration) {
        return AWSLambdaClientBuilder.standard()
            .withEndpointConfiguration(endpointConfiguration)
            .withCredentials(
                new AWSStaticCredentialsProvider(
                    new BasicAWSCredentials("dummyAccessKey", "dummySecretKey")))
            .build();
    }
    private void createLambda(AWSLambda clientLambda) throws IOException {
        CreateFunctionRequest functionRequest = new CreateFunctionRequest();
        functionRequest.setHandler("com.ssp.coreTeam.LambdaLoader::handleRequest");
        functionRequest.setFunctionName("handleRequest");
        functionRequest.setTimeout(900);
        functionRequest.setRuntime("java11");
        // note: this value is a function ARN rather than an IAM role ARN; LocalStack may not validate it
        functionRequest.setRole("arn:aws:lambda:us-east-1:000000000000:function:handleRequest");

        FunctionCode code = new FunctionCode();
        File file = new File("target/my-lambda-0.0.0-SNAPSHOT.jar");
        byte[] bytes;
        try (FileInputStream fileInputStream = new FileInputStream(file)) {
            bytes = IoUtils.toByteArray(fileInputStream);
        }
        code.setZipFile(ByteBuffer.wrap(bytes));
        functionRequest.setCode(code);

        Environment environment = new Environment();
        environment.setVariables(Map.of("LAMBDA_ENV", "dev"));
        functionRequest.setEnvironment(environment);

        CreateFunctionResult function = clientLambda.createFunction(functionRequest);
        System.out.println(function);
    }
}
In addition, this is how I have configured Lambda in the docker-compose file (notice LAMBDA_EXECUTOR=local):
localstack:
  image: 'localstack/localstack'
  ports:
    - '4566:4566'
  environment:
    - SERVICES=lambda,ssm
    - DEBUG=1
    - DATA_DIR=${DATA_DIR- }
    - PORT_WEB_UI=${PORT_WEB_UI- }
    - LAMBDA_EXECUTOR=local
    - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
    - DOCKER_HOST=unix:///var/run/docker.sock
    - HOST_TMP_FOLDER=${TMPDIR}
  volumes:
    - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
    - "/var/run/docker.sock:/var/run/docker.sock"
How can I see the logs and find out what happened there?

You've already set DEBUG to 1, so the logs are there.
To read them, use the standard Docker Compose facilities for logs. In your case it should be something like:
docker-compose logs localstack
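Note that nothing appears in those logs until the function is actually invoked; creating it is not enough. You can also add -f (docker-compose logs -f localstack) to follow the output live. As a minimal sketch of invoking the function and reading its log tail straight from the response, reusing the lambdaClient and function name from the question (and assuming LocalStack honors the Tail log type the way real AWS does):
// inside the test, after createLambda(lambdaClient); needs
// com.amazonaws.services.lambda.model.* plus java.util.Base64 and java.nio.charset.StandardCharsets
InvokeRequest invokeRequest = new InvokeRequest()
        .withFunctionName("handleRequest")   // the function created above
        .withLogType(LogType.Tail)           // ask for the tail of the execution log
        .withPayload("{}");
InvokeResult result = lambdaClient.invoke(invokeRequest);
// the payload is the handler's return value ("Complete")
System.out.println(StandardCharsets.UTF_8.decode(result.getPayload()));
// getLogResult() is the Base64-encoded last 4 KB of the log, including the logger.log() output
System.out.println(new String(Base64.getDecoder().decode(result.getLogResult()), StandardCharsets.UTF_8));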
I would also recommend aws-junit5, a small library for injecting AWS clients into your tests; it would greatly simplify them. It supports Lambda clients for both AWS Java SDK 1.x and 2.x. The usage is pretty straightforward:
@ExtendWith(Lambda.class)
class AmazonDynamoDBInjectionTest {

    @AWSClient(endpoint = Endpoint.class) // Endpoint configuration
    private AWSLambda client;

    @Test
    void test() throws IOException {
        CreateFunctionRequest functionRequest = new CreateFunctionRequest();
        functionRequest.setHandler("com.ssp.coreTeam.LambdaLoader::handleRequest");
        functionRequest.setFunctionName("handleRequest");
        functionRequest.setTimeout(900);
        functionRequest.setRuntime("java11");
        functionRequest.setRole("arn:aws:lambda:us-east-1:000000000000:function:handleRequest");

        FunctionCode code = new FunctionCode();
        File file = new File("target/my-lambda-0.0.0-SNAPSHOT.jar");
        FileInputStream fileInputStream = new FileInputStream(file);
        byte[] bytes = IoUtils.toByteArray(fileInputStream);
        code.setZipFile(ByteBuffer.wrap(bytes));
        functionRequest.setCode(code);

        Environment environment = new Environment();
        environment.setVariables(Map.of("LAMBDA_ENV", "dev"));
        functionRequest.setEnvironment(environment);

        // Just use the client here, it will be auto-injected!
        CreateFunctionResult function = client.createFunction(functionRequest);
        // Rest of your test
        System.out.println(function);
    }
}
There is even an example of CI/CD with GitHub, which is very similar to what you're doing.

Related

String concatenation in a yml file, for use with aspnetcore 2.1

YAML string concatenation does not work with my .NET applications. I have tried removing the '$' sign, but it still does not work (Java applications use the $ sign and work fine with it). It works for a single value, but not with concatenation.
yml-01
cicd:
  dbname: 172.10.10.110
  port: 5432
yml-02
datasource:
  url: jdbc:postgresql://${cicd:dbname}:${cicd:port}/sample-db
A solution for placeholder resolution in .NET Configuration (similar to the one Spring provides) is available in Steeltoe.Common. We haven't added WebHostBuilder or IConfigurationBuilder extensions just yet, but if you add a recent reference to Steeltoe.Common from the Steeltoe Dev feed, you should be able to do something like this:
public static IWebHostBuilder ResolveConfigurationPlaceholders(this IWebHostBuilder hostBuilder, LoggerFactory loggerFactory = null)
{
    return hostBuilder.ConfigureAppConfiguration((builderContext, config) =>
    {
        config.AddInMemoryCollection(
            PropertyPlaceholderHelper.GetResolvedConfigurationPlaceholders(
                config.Build(),
                loggerFactory?.CreateLogger("Steeltoe.Configuration.PropertyPlaceholderHelper")));
    });
}
The code above is used in the Steeltoe fork of eShopOnContainers.
You should take a look at YamlDotNet.
Here's an example of how to solve your problem using that library:
using YamlDotNet.RepresentationModel;
using YamlDotNet.Core;
Then, in your method:
var dbname = "172.10.10.110";
var port = "5432";
string content;
using (var reader = new StreamReader("your yml file"))
{
    content = reader.ReadToEnd();
}
var doc = new StringReader(content);
var yaml = new YamlStream();
yaml.Load(doc);
// Add the url where you use string interpolation to replace the values
var ymlFile = (YamlMappingNode)yaml.Documents[0].RootNode;
ymlFile.Children["datasource"] = new YamlMappingNode
{
    { "url", $"jdbc:postgresql://{dbname}:{port}/sample-db" }
};
yaml.Save(File.CreateText("C:\\yourNewFile.yml"), assignAnchors: false);
Here's a link to the NetCore package
I've solved this by writing an extension method on the IConfiguration interface.
public static string ReadFromConfigRepo(this IConfiguration configuration, string key)
{
    var pattern = @"\{(.*?)\}";
    var query = configuration[key];
    if (query.Contains('{'))
    {
        var matches = Regex.Matches(query, pattern);
        string value;
        foreach (Match m in matches)
        {
            value = configuration[m.Value.Substring(1, m.Value.Length - 2)];
            query = query.Replace(m.Value, value);
        }
    }
    return query.Trim();
}

How to format ServiceStack Redis connection string

How can I format the below Redis connection string:
Connection string:
myIP,keepAlive=180,ConnectRetry=30,ConnectTimeout=5000
I started writing a unit test but keep getting an "Input string was not in a correct format" error message:
[TestFixtureSetUp]
private void Init()
{
    var redisConnectionString = "myIP,keepAlive=180,ConnectRetry=30,ConnectTimeout=5000";
    _clientsManager = new PooledRedisClientManager(redisConnectionString);
}

[Test]
public void CanConnectToRedis()
{
    var readWrite = (RedisClient) _clientsManager.GetClient();
    using (var redis = _clientsManager.GetClient())
    {
        var redisClient = redis;
    }
}
See the connection string format on the ServiceStack.Redis home page:
redis://localhost:6379?ConnectTimeout=5000&IdleTimeOutSecs=180
Which can be used in any of the Redis Client Managers:
var redisManager = new RedisManagerPool(
    "redis://localhost:6379?ConnectTimeout=5000&IdleTimeOutSecs=180");
using (var client = redisManager.GetClient())
{
    client.Info.PrintDump();
}
The list of available configuration options is also on the homepage.

Hadoop RPC server doesn't stop

I was trying to create a simple parent/child process pair with IPC between them, using Hadoop IPC. The program executes and prints the results, but it doesn't exit. Here is the code for it:
interface Protocol extends VersionedProtocol {
    public static final long versionID = 1L;
    IntWritable getInput();
}

public final class JavaProcess implements Protocol {

    Server server;

    public JavaProcess() {
        String rpcAddr = "localhost";
        int rpcPort = 8989;
        Configuration conf = new Configuration();
        try {
            server = RPC.getServer(this, rpcAddr, rpcPort, conf);
            server.start();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public int exec(Class klass) throws IOException, InterruptedException {
        String javaHome = System.getProperty("java.home");
        String javaBin = javaHome +
            File.separator + "bin" +
            File.separator + "java";
        String classpath = System.getProperty("java.class.path");
        String className = klass.getCanonicalName();
        ProcessBuilder builder = new ProcessBuilder(
            javaBin, "-cp", classpath, className);
        Process process = builder.start();
        int exit_code = process.waitFor();
        server.stop();
        System.out.println("completed process");
        return exit_code;
    }

    public static void main(String... args) throws IOException, InterruptedException {
        int status = new JavaProcess().exec(JavaProcessChild.class);
        System.out.println(status);
    }

    @Override
    public IntWritable getInput() {
        return new IntWritable(10);
    }

    @Override
    public long getProtocolVersion(String paramString, long paramLong)
            throws IOException {
        return Protocol.versionID;
    }
}
Here is the child process class. I have realized that RPC.getServer() on the server side is the culprit. Is this a known Hadoop bug, or am I missing something?
public class JavaProcessChild {
    public static void main(String... args) {
        Protocol umbilical = null;
        try {
            Configuration defaultConf = new Configuration();
            InetSocketAddress addr = new InetSocketAddress("localhost", 8989);
            umbilical = (Protocol) RPC.waitForProxy(Protocol.class, Protocol.versionID,
                    addr, defaultConf);
            IntWritable input = umbilical.getInput();
            JavaProcessChild my = new JavaProcessChild();
            if (input != null && input.equals(new IntWritable(10))) {
                Thread.sleep(10000);
            } else {
                Thread.sleep(1000);
            }
        } catch (Throwable e) {
            e.printStackTrace();
        } finally {
            if (umbilical != null) {
                RPC.stopProxy(umbilical);
            }
        }
    }
}
We sorted that out via mail. But I just want to give my two cents here for the public:
The thread that is not dying there (and thus not letting the main thread finish) is the org.apache.hadoop.ipc.Server$Reader.
The reason is that the implementation of readSelector.select() is not interruptible. If you look closely in a debugger or thread dump, it waits on that call forever, even after the main thread has already been cleaned up.
Two possible fixes:
- make the reader thread a daemon (not so cool, because the selector won't be cleaned up properly, but the process will end)
- explicitly close the readSelector from outside when interrupting the thread pool (the idea is sketched below)
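To illustrate the second fix outside of Hadoop, here is a minimal, self-contained sketch (plain java.nio, not Hadoop code) of unblocking a thread that is stuck in Selector.select() from another thread:
import java.io.IOException;
import java.nio.channels.Selector;

public class SelectorWakeupSketch {
    public static void main(String[] args) throws Exception {
        final Selector readSelector = Selector.open();
        Thread reader = new Thread(() -> {
            try {
                System.out.println("reader: blocking in select()");
                readSelector.select();   // blocks until an event or wakeup()
                System.out.println("reader: unblocked, exiting");
            } catch (IOException e) {
                e.printStackTrace();
            }
        }, "reader");
        reader.start();

        Thread.sleep(500);        // let the reader block
        readSelector.wakeup();    // unblock select() from the outside
        reader.join();            // the reader thread now dies normally
        readSelector.close();     // release the selector's resources
    }
}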
However, this is a bug in Hadoop, and I have no time to look through the JIRAs. Maybe it is already fixed; in YARN the old IPC is replaced by protobuf and thrift anyway.
BTW, this also depends on the platform's selector implementation: I observed these zombies on Debian/Windows systems, but not on RedHat/Solaris.
If anyone is interested in a patch for Hadoop 1.0, email me. I will sort out the JIRA bug in the near future and edit this answer with more information. (Maybe it has been fixed in the meantime anyway.)

org.apache.commons.io.FileCleaningTracker does not delete temp files unless explicitly calling System.gc()?

I am working on an image upload feature for my web app and am having a strange issue with the FileCleaningTracker from Apache Commons FileUpload. I have an ImageUploadService with a FileCleaningTracker instance variable, and an upload method that creates a DiskFileItemFactory which references the FileCleaningTracker. After the upload method completes successfully, I set the FileCleaningTracker of the DiskFileItemFactory to null. I would therefore expect the DiskFileItemFactory to be garbage collected, and the underlying subclass of PhantomReference in FileCleaningTracker to be notified, hence deleting the temp file the DiskFileItemFactory created.
But that does not happen until I null the DiskFileItemFactory and call System.gc() at the end of the upload method (only nulling the DiskFileItemFactory does not help). This seems very strange to me. Here is my code:
@Override
public void upload(final HttpServletRequest request) {
    ValidateUtils.checkNotNull(request, "upload request");
    final File tmp = new File(this.tempFolder);
    if (!tmp.exists()) {
        tmp.mkdir();
    }
    DiskFileItemFactory fileItemFactory = new DiskFileItemFactory(this.sizeThreshold, tmp);
    fileItemFactory.setFileCleaningTracker(this.fileCleaningTracker);
    ServletFileUpload uploadHandler = new ServletFileUpload(fileItemFactory);
    List items;
    try {
        items = uploadHandler.parseRequest(request);
    } catch (final FileUploadException e) {
        throw new ImageUploadServiceException("Error parsing the http servlet request for image upload.", e);
    }
    final Iterator it = items.iterator();
    while (it.hasNext()) {
        final DiskFileItem item = (DiskFileItem) it.next();
        if (item.isFormField()) {
            // log message
        } else {
            final String fileName = item.getName();
            final File destination = this.createFileForUpload(fileName, this.uploadFolder);
            FileChannel outChannel;
            try {
                outChannel = new FileOutputStream(destination).getChannel();
            } catch (final FileNotFoundException e) {
                throw new ImageUploadServiceException(e);
            }
            FileChannel inChannel = null;
            try {
                inChannel = new FileInputStream(item.getStoreLocation()).getChannel();
                outChannel.transferFrom(inChannel, 0, item.getSize());
            } catch (final IOException e) {
                throw new ImageUploadServiceException(String.format("Error uploading image to '%s/%s'.", this.uploadFolder, destination.getName()), e);
            } finally {
                IOUtils.closeChannel(inChannel);
                IOUtils.closeChannel(outChannel);
            }
        }
    }
    fileItemFactory.setFileCleaningTracker(null);
}
With the above code, every upload creates a file in the temp folder, but the fileCleaningTracker never removes it. This is possibly because the DiskFileItemFactory instance is not garbage collected (I fail to see why it shouldn't be), or because it has been GCed but the PhantomReference in the fileCleaningTracker was not notified (how reliable is PhantomReference?).
I waited 10 minutes and the files are still there, so it shouldn't be because the GC has not run, and there are no exceptions.
Now if I add the following code, the temp files are removed every time after the upload:
fileItemFactory = null;
System.gc();
This looks very strange to me, as I would expect the fileItemFactory to be GCed without an explicit call to System.gc().
Any input will be appreciated.
Thank you.
I have the same problem. The temporary files are never removed, even after the server shuts down: the GC had never run, so the FileCleaningTracker had no chance to pull the tracked files to delete from its ReferenceQueue, and all the files remained on the hard drive.
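That behavior is inherent to phantom references: nothing is enqueued until a collection actually runs. A minimal standalone sketch (not part of commons-io) showing why the tracker stays idle without GC:
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;

public class PhantomReferenceSketch {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object tracked = new Object();
        PhantomReference<Object> ref = new PhantomReference<>(tracked, queue);

        tracked = null;                    // drop the last strong reference
        System.out.println(queue.poll());  // null: nothing is enqueued until a GC runs
        System.gc();                       // request a collection (a hint, not a guarantee)
        Thread.sleep(100);                 // give the reference handler a moment
        System.out.println(queue.poll());  // typically prints the PhantomReference now
    }
}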
Due to the specific behavior of my application, I have to clean up after each upload (files might be very big). Instead of using the standard org.apache.commons.io.FileCleaningTracker, I chose to override that class with my own implementation:
/**
 * Cleaning tracker to clean files after each upload with special method invocation.
 * Not thread safe and must be used with a 1 factory = 1 thread policy.
 */
public class DeleteFilesOnEndUploadCleaningTracker extends FileCleaningTracker {

    private List<String> filesToDelete = new ArrayList<>();

    public void deleteTemporaryFiles() {
        for (String file : filesToDelete) {
            new File(file).delete();
        }
        filesToDelete.clear();
    }

    @Override
    public synchronized void exitWhenFinished() {
        deleteTemporaryFiles();
    }

    @Override
    public int getTrackCount() {
        return filesToDelete.size();
    }

    @Override
    public void track(File file, Object marker) {
        filesToDelete.add(file.getAbsolutePath());
    }

    @Override
    public void track(File file, Object marker, FileDeleteStrategy deleteStrategy) {
        filesToDelete.add(file.getAbsolutePath());
    }

    @Override
    public void track(String path, Object marker) {
        filesToDelete.add(path);
    }

    @Override
    public void track(String path, Object marker, FileDeleteStrategy deleteStrategy) {
        filesToDelete.add(path);
    }
}
If this is the right approach for you, just inject an instance of the class above into your DiskFileItemFactory:
DeleteFilesOnEndUploadCleaningTracker tracker = new DeleteFilesOnEndUploadCleaningTracker();
fileItemFactory.setFileCleaningTracker(tracker);
And don't forget to invoke the cleaning method after your work with uploaded items is done:
tracker.deleteTemporaryFiles();
Forgot to mention: I use commons-fileupload version 1.2.2 and commons-io version 1.3.2.

com.sun.jersey.api.client.UniformInterfaceException (returned a response status of 400)

I am trying to set up a file upload example using JAX-RS. I could set up the project and successfully upload a file to a server location, but I get the following error when the file size is more than 10 KB (weird!!):
com.sun.jersey.api.client.UniformInterfaceException: POST http://localhost:9090/DOAFileUploader/rest/file/upload returned a response status of 400
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:607)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:507)
at com.sony.doa.rest.client.DOAClient.upload(DOAClient.java:75)
at com.sony.doa.rest.client.DOAMain.main(DOAMain.java:34)
I am new to JAX-RS and I'm not sure what exactly the issue is. Do I need to set some parameters client-side or server-side (like size, timeout, etc.)?
This is the client-side code calling the web service:
public void upload() {
    File file = new File(inputFilePath);
    FormDataMultiPart part = new FormDataMultiPart();
    part.bodyPart(new FileDataBodyPart("file", file, MediaType.APPLICATION_OCTET_STREAM_TYPE));
    WebResource resource = Client.create().resource(url);
    String response = resource.type(MediaType.MULTIPART_FORM_DATA_TYPE).post(String.class, part);
    System.out.println(response);
}
This is the server-side code:
#Path("/file")
public class UploadFileService {
#POST
#Path("/upload")
#Consumes(MediaType.MULTIPART_FORM_DATA)
public Response uploadFile(
#FormDataParam("file") InputStream uploadedInputStream,
#FormDataParam("file") FormDataContentDisposition fileDetail) {
String uploadedFileLocation = "e://uploaded/"
+ fileDetail.getFileName();
writeToFile(uploadedInputStream, uploadedFileLocation);
String output = "File uploaded to : " + uploadedFileLocation;
return Response.status(200).entity(output).build();
}
private void writeToFile(InputStream uploadedInputStream,
String uploadedFileLocation) {
try {
OutputStream out = new FileOutputStream(new File(
uploadedFileLocation));
int read = 0;
byte[] bytes = new byte[16000];
out = new FileOutputStream(new File(uploadedFileLocation));
while ((read = uploadedInputStream.read(bytes)) != -1) {
out.write(bytes, 0, read);
}
out.flush();
out.close();
} catch (IOException e) {
e.printStackTrace();
} } }
Please let me know what settings I have to change to support file sizes greater than 10 KB.
Thanks!
I use org.apache.commons.fileupload.servlet.ServletFileUpload in a Jersey context, and it works fine. And yes, it sets the max file size; sorry I missed this before.
Here is a snippet of the code I use (this is a multipart form, so there are other fields along with the file):
private LibraryUpload parseLibraryUpload(HttpServletRequest request) {
    LibraryUpload libraryUpload;
    File libraryZip = null;
    String name = null;
    String version = null;
    ServletFileUpload upload = new ServletFileUpload();
    upload.setFileSizeMax(MAX_FILE_SIZE);
    FileItemIterator iter;
    try {
        iter = upload.getItemIterator(request);
        while (iter.hasNext()) {
            ....
            if (item.isFormField()) {
                ....
            } else {
                BufferedInputStream buffer = new BufferedInputStream(stream);
                buffer.mark(MAX_FILE_SIZE);
                libraryZip = File.createTempFile("fromUpload", null);
                IOUtils.copy(buffer, new FileOutputStream(libraryZip));
                ...
            }
I encountered the same problem with Jersey. I activated the Jersey trace, but it didn't help me.
I then switched to an Apache library and saw that the problem was linked to Tomcat's repository for temporary files: the repository did not exist. For files under 10 KB, the repository is not used.
So, after creating the repository, I went back to the Jersey library and everything works fine.
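The 10 KB boundary both answers run into matches commons-fileupload's default size threshold (DiskFileItemFactory.DEFAULT_SIZE_THRESHOLD is 10240 bytes): smaller items are kept in memory, larger ones are spooled to the temp repository, so a missing temp directory only hurts above 10 KB. As a minimal sketch (the "uploads" directory name is hypothetical; adjust it to your container's temp location), you can make sure the repository exists before handing it to the factory:
import java.io.File;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;

// assumed location; pick whatever temp directory your container should use
File repository = new File(System.getProperty("java.io.tmpdir"), "uploads");
if (!repository.exists()) {
    repository.mkdirs();               // create the temp repository if it is missing
}
DiskFileItemFactory factory = new DiskFileItemFactory();
factory.setSizeThreshold(10240);       // the default: smaller items never touch disk
factory.setRepository(repository);     // larger items are spooled here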