Stackdriver Node.js Logging not showing up - express

I have a Node.js application running inside a Docker container and logging events to Stackdriver. It is an Express.js app that uses Winston for logging, with a Stackdriver transport.
When I run this container locally, everything is logged correctly and shows up in the Cloud Console. When I run the same container, with the same environment variables, in a GCE VM, the logs don't show up.

What do you mean exactly by locally? Are you running the container in Cloud Shell, or on an instance? Keep in mind that if your container or instance needs to do something privileged (like using the Stackdriver Logging client library), it won't work unless the instance has a service account with that role/privilege set up.
You mentioned that you use the same environment variables, so I take it that one of them points to your JSON key file. Is the key file present at that path on the instance?
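One quick way to answer that is to check from inside the running container on the VM. A minimal sketch (the env var name and the fallback path are placeholders; adjust them to whatever your setup actually uses):
const fs = require('fs');

// Print the key path the process sees and whether the file is actually there.
const keyPath = process.env.GOOGLE_APPLICATION_CREDENTIALS || '/path/to/keyfile.json';
console.log('key file:', keyPath, 'exists:', fs.existsSync(keyPath));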
From the Winston documentation, it looks like you need to specify the key file location for the service account:
const winston = require('winston');
const Stackdriver = require('@google-cloud/logging-winston');

winston.add(Stackdriver, {
  projectId: 'your-project-id',
  keyFilename: '/path/to/keyfile.json'
});
Have you checked that this points to the key of a service account with a logging role?
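If you are on winston 3.x, the equivalent setup creates the transport instance explicitly and passes it to a logger. A sketch using the same placeholder project ID and key path as above:
const winston = require('winston');
const { LoggingWinston } = require('@google-cloud/logging-winston');

const stackdriver = new LoggingWinston({
  projectId: 'your-project-id',        // placeholder
  keyFilename: '/path/to/keyfile.json' // placeholder
});

const logger = winston.createLogger({
  transports: [new winston.transports.Console(), stackdriver]
});

logger.info('test entry from the GCE VM');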

Related

getting Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1 despite having credentials in config file

I have a typescript/node-based application where the following line of code is throwing an error:
const res = await s3.getObject(obj).promise();
The error I'm getting in terminal output is:
❌ Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
CredentialsError: Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
However, I do actually have a credentials file in my .aws directory with values for aws_access_key_id and aws_secret_access_key. I have also exported the values for these with the variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. I have also tried this with and without running export AWS_SDK_LOAD_CONFIG=1 but to no avail (same error message). Would anyone be able to provide any possible causes/suggestions for further troubleshooting?
Install dotenv: npm i dotenv
Add a .env file with your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY credentials in it.
Then, in your index.js (or equivalent entry file), add require("dotenv").config();
Then update the config of your AWS client, for example the S3 client from the question:
const AWS = require("aws-sdk");

const s3 = new AWS.S3({
  region: "eu-west-2",
  maxRetries: 3,
  httpOptions: { timeout: 30000, connectTimeout: 5000 },
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});
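With the client configured this way, the failing call from the question should pick up the credentials loaded from .env (the bucket and key below are placeholders):
const obj = { Bucket: "your-bucket", Key: "your-key" };

s3.getObject(obj).promise()
  .then((res) => console.log("fetched", res.ContentLength, "bytes"))
  .catch((err) => console.error("still failing:", err.message));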
Try not setting AWS_SDK_LOAD_CONFIG at all (unset it). Unset all other AWS variables as well. On macOS/Linux, you can run export | grep AWS_ to find others you might have set.
Next, do you have AWS connectivity from the command line? Install the AWS CLI v2 if you don't have it yet, and run aws sts get-caller-identity from a terminal window. Don't bother trying to run node until you get this working. You can also try aws configure list.
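If the CLI check works but Node still fails, you can run the same identity check in-process. A minimal sketch assuming the aws-sdk v2 package used in the question:
const AWS = require("aws-sdk");

// Should print the same account and ARN as `aws sts get-caller-identity`.
new AWS.STS().getCallerIdentity({}).promise()
  .then((id) => console.log(id.Account, id.Arn))
  .catch((err) => console.error("credential resolution failed:", err.message));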
Read through all the sections of Configuring the AWS CLI, paying particular attention to how to use the credentials and config files at $HOME/.aws/credentials and $HOME/.aws/config. Are you using the default profile or a named profile?
I prefer to use named profiles, but I use more than one so that may not be needed for you. I have always found success using the AWS_PROFILE environment variable:
export AWS_PROFILE=your_profile_name # macOS/linux
setx AWS_PROFILE your_profile_name # Windows
$Env:AWS_PROFILE="your_profile_name" # PowerShell
This works for me both with an Okta/gimme-aws-creds scenario, as well as an Amazon SSO scenario. With the Okta scenario, just the AWS secret keys go into $HOME/.aws/credentials, and further configuration such as default region or output format go in $HOME/.aws/config (this separation is so that tools can completely rewrite the credentials file without touching the config). With the Amazon SSO scenario, all the settings go in the config.
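If you would rather be explicit in code than rely on AWS_PROFILE, the v2 SDK can also load a named profile from the shared credentials file directly. A small sketch (the profile name is a placeholder for a section in $HOME/.aws/credentials):
const AWS = require("aws-sdk");

// Load a named profile explicitly instead of relying on AWS_PROFILE.
const credentials = new AWS.SharedIniFileCredentials({ profile: "your_profile_name" });
AWS.config.credentials = credentials;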

.NET Core 3.x setting development AWS credentials

I have EC2 instances (via Elastic Beanstalk) running my ASP.Net Core 3.1 web app without a problem. AWS credentials are included in the key pair configured with the instance.
I want to now store my Data Protection keys in a S3 bucket that I created for them, so I can share the keys among all of the EC2 instances. However, when I add this service in my Startup.ConfigureServices, I get a runtime error locally:
services.AddDefaultAWSOptions(Configuration.GetAWSOptions("AWS"));
services.AddAWSService<IAmazonS3>();
services.AddDataProtection()
.SetApplicationName("Crums")
.PersistKeysToAWSSystemsManager("/CrumsWeb/DataProtection");
My app runs fine locally if I comment out the .PersistKeysToAWSSystemsManager("/CrumsWeb/DataProtection"); line above. When I uncomment the line, the error occurs. So it has something to do with that, but I can't seem to figure it out.
I was going to use PersistKeysToAwsS3 by hotchkj, but it was deprecated when AWS came out with PersistKeysToAWSSystemsManager.
The runtime error AmazonClientException: No RegionEndpoint or ServiceURL configured happens in CreateHostBuilder in my Program.cs.
I've spent many hours on this trying just to get Visual Studio 2019 to run my app locally, using suggestions from these sites:
https://aws.amazon.com/blogs/developer/configuring-aws-sdk-with-net-core/
https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config-netcore.html
ASP NET Core AWS No RegionEndpoint or ServiceURL configured when deployed to Heroku
No RegionEndpoint or ServiceURL configured
https://github.com/secretorange/aws-aspnetcore-environment-startup
https://www.youtube.com/watch?v=C4AyfV3Z3xs&ab_channel=AmazonWebServices
My appsettings.Development.json (and I also tried it in appsettings.json) contains:
"AWS": {
"Profile": "default",
"Region": "us-east-1",
"ProfilesLocation": "C:\\Users\\username\\.aws\\credentials"
}
And the credentials file contains:
[default]
aws_access_key_id = MY_ACCESS_KEY
aws_secret_access_key = MY_SECRET_KEY
region = us-east-1
toolkit_artifact_guid=GUID
I ended up abandoning PersistKeysToAWSSystemsManager for storing my Data Protection keys because I don't want to set up yet another AWS service just to store keys in their Systems Manager. I am already paying for an S3 account, so I chose to use the deprecated NuGet package AspNetCore.DataProtection.Aws.S3.
I use server-side encryption on the bucket I created for the keys. This is the code in Startup.cs:
services.AddDataProtection()
    .SetApplicationName("AppName")
    .PersistKeysToAwsS3(new AmazonS3Client(RegionEndpoint.USEast1), new S3XmlRepositoryConfig("S3BucketName")
    {
        KeyPrefix = "DataProtectionKeys/", // Folder in the S3 bucket for keys
    });
Notice the RegionEndpoint parameter in the PersistKeysToAwsS3, which resolved the No RegionEndpoint or ServiceURL Configured error.
I added the AmazonS3FullAccess policy to the IAM role attached to my instances, which gives them permission to access the S3 bucket. My local development machine also seems to be able to access the S3 bucket, although I don't know where it's getting credentials from. I tried several iterations of appsettings.json and credentials-file changes to set the region and credentials locally, but nothing worked. Maybe it's using the credentials I entered when I set up the AWS Toolkit in Visual Studio.

ECS Fargate - No Space left on Device

I had deployed my ASP.NET Core application on AWS Fargate and all was working fine. I am using the awslogs driver and logs were correctly sent to CloudWatch. But after a few days of working correctly, I am now seeing only "no space left on device" errors instead of my application logs.
So no application logs are showing up because the disk is full. If I update the ECS service, logging starts working again, suggesting that the disk has been cleaned up.
This link suggests that the awslogs driver does not take up disk space and sends logs to CloudWatch instead:
https://docs.aws.amazon.com/AmazonECS/latest/userguide/task_cannot_pull_image.html
Has anyone else faced this issue and knows how to resolve it?
You need to set the "LibraryLogFileName" parameter in your AWS Logging configuration to null.
So in the appsettings.json file of a .Net Core application, it would look like this:
"AWS.Logging": {
"Region": "eu-west-1",
"LogGroup": "my-log-group",
"LogLevel": {
"Default": "Information",
"Microsoft": "Warning",
"Microsoft.Hosting.Lifetime": "Information"
},
"LibraryLogFileName": null
}
It depends on how you have logging configured in your application. The awslogs driver just grabs all the output sent to the console and saves it to CloudWatch; .NET doesn't necessarily know about this and will keep writing logs like it would have otherwise, most likely to whatever file location it would normally use.
Advice for how to troubleshoot and resolve:
First, run the application locally and check if log files are being saved anywhere
Second, optionally run a container test to see if log files are being saved there too:
Make sure you have Docker installed on your machine
Pull the container image that Fargate is running from ECR:
docker pull {Image URI from ECR}
Run it locally
Do some task you know will generate some logs
Use docker exec -it to connect up to your container
Check if log files are being written to the location you identified when you did the first test
Finally, once you have identified that logs are being written to files somewhere, pick one of these options:
Add some flag which can be optionally specified to disable logging to a file. Use this when running your application inside of the container.
Implement some logic to clean up log files periodically or once they reach a certain size. (Keep in mind ECS containers have up to 20GB local storage)
Disable all file logging (not a good option, in my opinion)
Best of luck!

NServiceBus endpoint is not starting on Azure Service Fabric local cluster

I have a .NetCore stateless WebAPI service running inside Service Fabric local cluster.
return Endpoint.Start(endpointConfiguration).GetAwaiter().GetResult();
When I'm trying to start NServiceBus endpoint, I'm getting this exception :
Access to the path 'C:\SfDevCluster\Data_App_Node_0\AppType_App10\App.APIPkg.Code.1.0.0.diagnostics' is denied.
How can this be solved? VS is running under administrator.
The issue you are having is that your application is not supposed to write to the folder you are writing to.
The package folder is used to store your application binaries and can be recreated dynamically whenever an application is hosted on the node.
Also, the binaries are reused by multiple service instances running on the same node, so different instances might end up competing for the same files.
You should instead instruct your application to write to the work folder:
public Stateless1(StatelessServiceContext context) : base(context)
{
    // The work directory is a per-instance, writable location provided by Service Fabric.
    string workdir = context.CodePackageActivationContext.WorkDirectory;
}
The code above will give you a path like this:
'C:\SfDevCluster\Data_App_Node_0\AppType_App10\App.APIPkg.Code.1.0.0.diagnostics\work'
This folder is dynamic and will change depending on the node or instance your application is running on; once it is created, your application already has permission to write to it.
For more info, see:
how-do-i-get-files-into-the-work-directory-of-a-stateless-service?forum=AzureServiceFabric
Open the folder's Properties, Security tab
Select ServiceFabricAllowedUsers
Add Write permission

Configuring Google cloud bucket as Airflow Log folder

We just started using Apache Airflow in our project for our data pipelines. While exploring its features, we came to know about configuring a remote folder as the log destination in Airflow. For that we:
Created a Google Cloud Storage bucket.
From the Airflow UI, created a new GS connection.
I am not able to understand all the fields. I just created a sample GCS bucket under my project from the Google console and gave that project ID to this connection, leaving the key file path and scopes blank.
Then I edited the airflow.cfg file as follows:
remote_base_log_folder = gs://my_test_bucket/
remote_log_conn_id = test_gs
After these changes I restarted the web server and scheduler, but my DAGs are still not writing logs to the GCS bucket. I can see the logs being created in base_log_folder, but nothing is created in my bucket.
Is there any extra configuration needed on my side to get this working?
Note: Using Airflow 1.8. (I faced the same issue with Amazon S3 as well.)
Update (20/09/2017): I tried the GCS method (screenshot attached), but I am still not getting logs in the bucket.
Thanks,
Anoop R
I advise you to use a DAG to connect Airflow to GCP instead of the UI.
First, create a service account on GCP and download the JSON key.
Then execute this DAG (you can modify the scopes of your access):
import json

from datetime import datetime

from airflow import DAG, settings
from airflow.models import Connection
from airflow.operators.python_operator import PythonOperator

def add_gcp_connection(ds, **kwargs):
    """Add an Airflow connection for GCP"""
    new_conn = Connection(
        conn_id='gcp_connection_id',
        conn_type='google_cloud_platform',
    )
    scopes = [
        "https://www.googleapis.com/auth/pubsub",
        "https://www.googleapis.com/auth/datastore",
        "https://www.googleapis.com/auth/bigquery",
        "https://www.googleapis.com/auth/devstorage.read_write",
        "https://www.googleapis.com/auth/logging.write",
        "https://www.googleapis.com/auth/cloud-platform",
    ]
    conn_extra = {
        "extra__google_cloud_platform__scope": ",".join(scopes),
        "extra__google_cloud_platform__project": "<name_of_your_project>",
        "extra__google_cloud_platform__key_path": "<path_to_your_json_key>",
    }
    new_conn.set_extra(json.dumps(conn_extra))

    # Only add the connection if one with this conn_id does not exist yet.
    session = settings.Session()
    if not session.query(Connection).filter(
            Connection.conn_id == new_conn.conn_id).first():
        session.add(new_conn)
        session.commit()
    else:
        msg = '\n\tA connection with `conn_id`={conn_id} already exists\n'
        print(msg.format(conn_id=new_conn.conn_id))

dag = DAG('add_gcp_connection', start_date=datetime(2016, 1, 1),
          schedule_interval='@once')

# Task to add a connection
AddGCPCreds = PythonOperator(
    dag=dag,
    task_id='add_gcp_connection_python',
    python_callable=add_gcp_connection,
    provide_context=True)
Thanks to Yu Ishikawa for this code.
Yes, you need to provide additional information for both the S3 and the GCP connection.
S3
Configuration is passed via the extra field as JSON. You can provide only a profile
{"profile": "xxx"}
or credentials
{"profile": "xxx", "aws_access_key_id": "xxx", "aws_secret_access_key": "xxx"}
or a path to a config file
{"profile": "xxx", "s3_config_file": "xxx", "s3_config_format": "xxx"}
With the first option, boto will try to detect your credentials itself.
Source code - airflow/hooks/S3_hook.py:107
GCP
You can either provide key_path and scope (see Service account credentials) or credentials will be extracted from your environment in this order:
Environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to a file with stored credentials information.
Stored "well known" file associated with gcloud command line tool.
Google App Engine (production and testing)
Google Compute Engine production environment.
Source code - airflow/contrib/hooks/gcp_api_base_hook.py:68
The reason for logs not being written to your bucket could be related to the service account rather than to the Airflow config itself. Make sure it has access to the bucket in question; I had the same problem in the past.
Try adding more generous permissions to the service account, e.g. even project-wide Editor, and then narrow them down. You could also try using the GCS client with that key and see if you can write to the bucket.
For me personally, this scope works fine for writing logs: "https://www.googleapis.com/auth/cloud-platform"