Building .NET Core 2.1 app fails in Google Cloud Shell - asp.net-core

I've created an ASP.NET Core 2.1 app that I'm trying to deploy to Google Cloud Platform. It builds just fine using dotnet build locally.
I cannot build it through Google Cloud Shell, though. Running dotnet --version confirms the Google Cloud Shell has .NET Core 2.0 installed.
Running gcloud app deploy initiates a deployment of the app, but I receive a cryptic error from the log saying:
Step #0: Status: Downloaded newer image for gcr.io/gcp-runtimes/aspnetcorebuild@sha256:ad1b788a9b4cca75bb61eb523ef358ca059c4bd208fba15a10ccc6acab83f49a
Step #0: No .deps.json file found for the app
Finished Step #0
ERROR: build step 0 "gcr.io/gcp-runtimes/aspnetcorebuild@sha256:ad1b788a9b4cca75bb61eb523ef358ca059c4bd208fba15a10ccc6acab83f49a" failed: exit status 1
I was under the impression that GCP supports .NET Core 2.1 containers by default, so I haven't included a Dockerfile.
I'm trying to deploy to the flexible environment; here's my app.yaml file:
runtime: aspnetcore
env: flex
Do I need to create a custom Docker container? Or is there some other way to get support for .NET Core 2.1 in Google Cloud Shell?
Edit: For now I've installed Google Cloud Tools so I can run gcloud app deploy in a local shell after running dotnet publish.

I tried to reproduce it, and indeed Cloud Shell currently supports .NET Core 2.0 only. I've raised that with the right engineers so the Cloud Shell image gets updated to support .NET Core 2.1.
In the meantime:
Create a Dockerfile based on this image:
gcr.io/google-appengine/aspnetcore:2.1
Use gcloud builds submit to build an image. The first time you run it, it will ask you to enable the Cloud Build API; approve that. This does not use the local (i.e. Cloud Shell's) docker build command; instead it submits your artifacts to be built by Cloud Build and pushed to Container Registry.
Deploy to App Engine Flex with gcloud app deploy, passing --image-url with the Container Registry address of the image you built in the previous step.
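For reference, here's a minimal sketch of those steps. MyApp.dll, PROJECT_ID, and my-app are placeholders to replace with your own names, and the Dockerfile goes next to the dotnet publish output:
# Dockerfile (MyApp.dll is a placeholder for your published entry assembly)
FROM gcr.io/google-appengine/aspnetcore:2.1
ADD ./ /app
ENV ASPNETCORE_URLS=http://*:${PORT}
WORKDIR /app
ENTRYPOINT ["dotnet", "MyApp.dll"]
Then build and deploy from the publish directory:
# Build the image with Cloud Build and push it to Container Registry
gcloud builds submit --tag gcr.io/PROJECT_ID/my-app .
# Deploy the prebuilt image to App Engine Flex
gcloud app deploy --image-url=gcr.io/PROJECT_ID/my-app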

Remember that Microsoft switched from microsoft/aspnetcore to microsoft/dotnet for the .NET Core images:
https://hub.docker.com/r/microsoft/dotnet/
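In a Dockerfile that targets the 2.1-era runtime, that switch looks roughly like this (the tag names reflect the naming on the Docker Hub page above):
# Before: old repository, used through 2.0
FROM microsoft/aspnetcore:2.0
# After: new repository for the ASP.NET Core 2.1 runtime
FROM microsoft/dotnet:2.1-aspnetcore-runtime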

Related

ffmpeg was not found on your system in Azure Service Fabric application

I have a stateless Service Fabric application which uses ffmpeg.exe to convert video files. ffmpeg.exe is added to the project with its properties set to Content & Copy always. When I install the app on an Azure VM (Service Fabric 5-node cluster), it is deployed to D:\SvcFab\_App\Sample_App1\Sample.Code.1.0.0 (the D drive is temp storage on the Azure VM). Whenever I try to convert a video file, I get an "ffmpeg.exe was not found on your system" exception. I am able to convert files in the development environment and on an on-prem server without any exception.
I tried to access ffmpeg.exe using Path.Combine(Directory.GetCurrentDirectory(), "ffmpeg.exe") and Path.Combine(FabricRuntime.GetActivationContext().GetCodePackageObject("Code").Path, "ffmpeg.exe").
ffmpeg won't work unless you have it preinstalled on the VM you are using.
Once installed, it will work without even needing to place ffmpeg.exe alongside the source code.
If you are using a Linux VM, installation can be done with the package manager.
Here I have yum as the package manager:
yum install ffmpeg
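On a Debian- or Ubuntu-based VM, the equivalent with apt would be:
sudo apt-get update
sudo apt-get install ffmpeg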
If you are using a Windows VM, you first have to upload the folder you downloaded from the ffmpeg website, and then add the path of its bin folder to the PATH environment variable.
This can be done in PowerShell with the following command:
$Env:PATH += ";<Path to your bin folder>"
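Note that this only changes PATH for the current PowerShell session. To persist it across restarts, one option is setx, run after the line above (this rewrites the stored user PATH, so double-check the resulting value first):
# Persist the session PATH (including the bin folder added above) for the current user
setx PATH "$Env:PATH"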

Testcontainers and Rancher

I have a Spring Boot application with integration tests that use Testcontainers.
Until recently I used Docker Desktop and could easily run the tests from within IntelliJ or from the CLI.
Recently I switched my Windows machine to Rancher Desktop.
Now, when trying to run the integration tests with gradle integrationTest, I'm getting this error:
Caused by: java.lang.IllegalStateException: Previous attempts to find a Docker environment failed. Will not retry. Please see logs and check configuration
at org.testcontainers.dockerclient.DockerClientProviderStrategy.getFirstValidStrategy(DockerClientProviderStrategy.java:109)
at org.testcontainers.DockerClientFactory.getOrInitializeStrategy(DockerClientFactory.java:136)
at org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:178)
at org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14)
at org.testcontainers.LazyDockerClient.authConfig(LazyDockerClient.java:12)
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:310)
Is there an additional configuration that is needed in Intellij or Rancher or Windows to make it work?
UPDATE Feb 2022: As reported here, Testcontainers works nicely with Rancher Desktop 1.0.1.
Based on the following two closed issues - first, second - in the testcontainers-java GitHub repo, Testcontainers doesn't seem to support Rancher Desktop, at least not officially.
I'm running Rancher Desktop version 1.0.0 on my Windows machine and could get Testcontainers to work simply by adding checks.disable=true to .testcontainers.properties (located under C:\Users\<your user>).
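For reference, a minimal .testcontainers.properties containing just that flag looks like this:
# C:\Users\<your user>\.testcontainers.properties
checks.disable=true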
Updating Rancher Desktop to version 1.0.1 fixed this issue for me.
I got this error because my Rancher Desktop was using containerd. If you also use Rancher Desktop, try switching to dockerd under Settings, but back up your data first, just in case.

ADF pipelines not able to run on self hosted integration runtime

We have an Azure Data Factory V2 pipeline consuming data from an on-prem SQL Server. The pipeline was running perfectly fine until two weeks ago, when it started running very slowly (2 h 15 min vs. the usual 15 min).
So today we tried restarting the machine on which the IR is installed. After that, the pipeline started giving an error:
Unable to load DLL 'jvm.dll': The specified module could not be found.
We have verified everything mentioned in this post.
Then we reinstalled the Integration Runtime on the machine, and now the pipelines keep running without transferring any data. All the pipelines sit in the queue, there is no activity visible on the IR monitor, and the pipelines are not sending any requests to the IR.
Sharing the answer as per the comment by the original poster:
Able to resolve the issue by reinstalling the JRE and the Integration Runtime.
Try a clean install: uninstall the IR and the JRE, then install the JRE first and the IR after.
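If you want to sanity-check the JRE side before reinstalling, here's a quick PowerShell sketch (the Java install path is an assumption; adjust it to your machine):
# Look for jvm.dll under the default Java install location
Get-ChildItem "C:\Program Files\Java" -Recurse -Filter jvm.dll
# The IR locates the JRE through JAVA_HOME, so confirm it points at the JRE root
$Env:JAVA_HOME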

Trying to use System.Diagnostics.Process.Start() on a Raspberry Pi

I have an ASP.NET Core 2.1 project that creates and executes a process on the Pi. The project runs fine, but it throws the exception "System.ComponentModel.Win32Exception (8): Exec format error" when it tries to start the process.
I have tried publishing with -r ubuntu.16.04-arm, but it still doesn't work.
Does .NET Core 2.1 support creating processes on ARM yet?
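For reference, the publish commands in question look like this; linux-arm is the portable ARM32 runtime identifier and is shown here only as an alternative worth trying, not a confirmed fix:
# What was tried above
dotnet publish -c Release -r ubuntu.16.04-arm
# Portable ARM32 RID (assumes a 32-bit OS on the Pi)
dotnet publish -c Release -r linux-arm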

Server not starting in IBM API Connect toolkit

I have created APIs in the API Connect toolkit. To test an API locally in the Explore tab, I am trying to start the server.
But I'm getting this error: "Error: It appears that Docker for Windows has not been installed. To install Docker for Windows, please visit https://docs.docker.com/docker-for-windows/install/ For more information, check the docs".
From my understanding you don't need Docker to test locally in the toolkit. Any suggestions to fix the issue?
More info: APIC version: API Connect v5.0.8.3 (apiconnect: v2.7.209), npm version: 6.1.0.
It used to work before; suddenly I am getting the above error. I tried re-installing, but the issue persists.
With newer versions of APIC, you must have Docker installed and working properly on your Windows environment to be able to install the API Connect Toolkit with DataPower.
Please find the steps to install the APIC toolkit on these pages:
https://www.ibm.com/support/knowledgecenter/en/SSFS6T/com.ibm.apic.toolkit.doc/tapim_cli_install.html
https://www.ibm.com/support/knowledgecenter/en/SSFS6T/com.ibm.apic.toolkit.doc/tapim_apic_test_with_dpdockergateway.html
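For reference, the v5 toolkit itself is installed through npm, which is why the npm version is listed above; a minimal sketch (see the linked pages for the authoritative steps):
npm install -g apiconnect
# Should list the available toolkit commands once installed
apic --help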