Error: Rollup failed to resolve import "vue-router" when vite build in bitbucket
bitbucket-pipelines.yml
image: node:16
pipelines:
  branches:
    master:
      - step:
          name: install
          caches:
            - node
          script:
            - npm i
      - step:
          name: build
          caches:
            - node
          script:
            - npm run build
          artifacts: # defining the artifacts to be passed to each future step
            - dist/**
      - step:
          name: Deploy to Serve
          deployment: Production
          script:
            - pipe: atlassian/sftp-deploy:0.5.5
              variables:
                USER: $FTP_USERNAME
                SERVER: $FTP_HOST
                REMOTE_PATH: $FTP_SITE_ROOT
                PASSWORD: $SFTP_PASSWORD
                LOCAL_PATH: $BITBUCKET_CLONE_DIR/dist/*
                DEBUG: 'true'
Error information:
[vite]: Rollup failed to resolve import "vue-router" from "src/router/index.js".
This is most likely unintended because it can break your application at runtime.
If you do want to externalize this module explicitly add it to
build.rollupOptions.external
error during build:
Error: [vite]: Rollup failed to resolve import "vue-router" from "src/router/index.js".
This is most likely unintended because it can break your application at runtime.
If you do want to externalize this module explicitly add it to
build.rollupOptions.external
at onRollupWarning (/opt/atlassian/pipelines/agent/build/node_modules/vite/dist/node/chunks/dep-e0fe87f8.js:43253:19)
at onwarn (/opt/atlassian/pipelines/agent/build/node_modules/vite/dist/node/chunks/dep-e0fe87f8.js:43037:13)
at Object.onwarn (/opt/atlassian/pipelines/agent/build/node_modules/rollup/dist/shared/rollup.js:23003:13)
at ModuleLoader.handleResolveId (/opt/atlassian/pipelines/agent/build/node_modules/rollup/dist/shared/rollup.js:22347:26)
at /opt/atlassian/pipelines/agent/build/node_modules/rollup/dist/shared/rollup.js:22319:26
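In this pipeline the likely cause is that node_modules does not exist when npm run build runs: each Bitbucket Pipelines step starts in a fresh container, and the node cache is best-effort (it is only populated after a previous successful run), so the packages installed in the install step are not guaranteed to be present in the build step. A minimal sketch of one common fix, assuming vue-router is listed under dependencies in package.json, is to install and build in the same step:
      - step:
          name: install and build
          caches:
            - node
          script:
            - npm i          # ensure node_modules exists in this container
            - npm run build  # vue-router can now be resolved by Rollup
          artifacts:
            - dist/**
Alternatively, declaring node_modules/** as an artifact of the install step would carry it into later steps, at the cost of a much larger artifact.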

Related

Dotnet SonarCloud end failed

I am trying to integrate my .NET 6 solution with SonarCloud and GitHub Actions. The problem is that the action build fails on the Sonar scanner end step. I tried changing working dirs, but with the same effect. The project is public HERE.
The SonarScanner for MSBuild integration failed: SonarCloud was unable to collect the required information about your projects.
Possible causes:
- The project has not been built - the project must be built in between the begin and end steps
- An unsupported version of MSBuild has been used to build the project. Currently MSBuild 14.0.25420.1 and higher are supported.
- The begin, build and end steps have not all been launched from the same folder
- None of the analyzed projects have a valid ProjectGuid and you have not used a solution (.sln)
SonarScanner for MSBuild 5.7.2
Using the .NET Core version of the Scanner for MSBuild
Post-processing started. 10:38:06.016 Generation of the sonar-properties file failed. Unable to complete the analysis.
10:38:06.024 Post-processing failed. Exit code: 1
Error: Process completed with exit code 1.
name: build-all
# Controls when the action will run.
on:
  push:
    branches:
      - main
env:
  DOTNET_VERSION: 6.0.x
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build-windows:
    # The type of runner that the job will run on
    runs-on: windows-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
        with:
          # Disabling shallow clone is recommended for improving relevancy of reporting
          fetch-depth: 0
      - uses: actions/setup-dotnet@v1
        with:
          dotnet-version: ${{ env.DOTNET_VERSION }}
      - uses: microsoft/setup-msbuild@v1
      - uses: actions/setup-java@v2
        with:
          distribution: 'adopt'
          java-version: '11'
      - name: Restore NuGet packages
        run: |
          cd App
          nuget restore App.sln
      - name: Begin Sonar scan
        run: |
          cd App
          dotnet tool install --global dotnet-sonarscanner
          dotnet sonarscanner begin /o:vladimirpetukhov /k:vladimirpetukhov_Musement_CLI /d:sonar.login=${{ secrets.SONAR_TOKEN }} /d:sonar.host.url=https://sonarcloud.io
      - name: Build Api
        run: |
          cd ./App/App.API
          dotnet build App.API.csproj --no-restore
          # dotnet test App.API.csproj --no-build --no-restore --verbosity normal -p:CollectCoverage=true -p:CoverletOutputFormat=opencover
      - name: Build Main
        run: |
          cd ./App/App.Main
          dotnet build App.Main.csproj --no-restore
          # dotnet test App.Main.csproj --no-build --no-restore --verbosity normal -p:CollectCoverage=true -p:CoverletOutputFormat=opencover
      - name: End Sonar scan
        run: |
          cd App
          dotnet sonarscanner end /d:sonar.login=${{ secrets.SONAR_TOKEN }}
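Note that the "same folder" cause listed in the error matches this workflow: the begin and end steps run from App, while the builds cd into App/App.API and App/App.Main. A minimal sketch of a replacement build step, assuming App.sln references both projects, keeps everything between begin and end launched from the same folder:
      # Hypothetical single build step, launched from the same folder (App)
      # as the begin and end steps, building the whole solution at once.
      - name: Build solution
        run: |
          cd App
          dotnet build App.sln --no-restore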

Serverless python packaging numpy dependency error

I have been running into issues when making function calls from my deployed Python 3.7 Lambda function that, judging by the error message, are related to numpy. The error says the package cannot be imported, and despite trying many of the solutions I have read about, I haven't had any success, so I am wondering what to test next or how to debug further.
I have tried the following:
- Installed Docker, added the serverless-python-requirements plugin, and configured it in the yml
- Installed packages in the app directory so they are bundled and deployed: pip install -t src/vendor -r requirements.txt --no-cache-dir
- Uninstalled setuptools and numpy and reinstalled them in that order
Error Message (Displayed after running sls invoke -f auth):
{
"errorMessage": "Unable to import module 'data': Unable to import required dependencies:\nnumpy: \n\nIMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!\n\nImporting the numpy c-extensions failed.\n- Try uninstalling and reinstalling numpy.\n- If you have already done that, then:\n 1. Check that you expected to use Python3.7 from \"/var/lang/bin/python3.7\",\n and that you have no directories in your PATH or PYTHONPATH that can\n interfere with the Python and numpy version \"1.18.1\" you're trying to use.\n 2. If (1) looks fine, you can open a new issue at\n https://github.com/numpy/numpy/issues. Please include details on:\n - how you installed Python\n - how you installed numpy\n - your operating system\n - whether or not you have multiple versions of Python installed\n - if you built from source, your compiler versions and ideally a build log\n\n- If you're working with a numpy git repository, try `git clean -xdf`\n (removes all files not under version control) and rebuild numpy.\n\nNote: this error has many possible causes, so please don't comment on\nan existing issue about this - open a new one instead.\n\nOriginal error was: No module named 'numpy.core._multiarray_umath'\n",
"errorType": "Runtime.ImportModuleError"
}
My setup is as follows:
OS: Mac OS X
Local Python: /Users/me/miniconda3/bin/python
Local Python version: Python 3.7.4
Serverless Environment Information (Runtime = Python3.7):
Operating System: darwin
Node Version: 12.14.0
Framework Version: 1.67.3
Plugin Version: 3.6.6
SDK Version: 2.3.0
Components Version: 2.29.1
Docker:
Docker version 19.03.13, build 4484c46d9d
serverless.yml:
service: understand-your-sleep-api
plugins:
  - serverless-python-requirements
  - serverless-offline-python
custom:
  pythonRequirements:
    dockerizePip: true # non-linux
    slim: true
    useStaticCache: false
    useDownloadCache: false
    invalidateCaches: true
provider:
  name: aws
  runtime: python3.7
  stage: ${opt:stage, 'dev'}
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ssm:GetParameter
      Resource: "arn:aws:ssm:us-east-1:*id*:parameter/*"
  environment:
    STAGE: ${self:provider.stage}
functions:
  auth:
    handler: data.auth
    events:
      - http:
          path: /auth
          method: get
          cors: true
package:
  exclude:
    - env.yml
    - node_modules/**
requirements.txt:
pandas==1.0.0
fitbit==0.3.1
oauthlib==3.1.0
requests==2.22.0
requests-oauthlib==1.3.0
data.py:
import sys
sys.path.insert(0, 'src/vendor') # Location of packages that follow
import json
from datetime import timedelta, datetime, date
import math
import pandas as pd
from requests_oauthlib import OAuth2Session
from urllib.parse import urlparse, parse_qs
import fitbit
import requests
import webbrowser
import base64
import os
import logging
def auth(event, context):
    ...
Use a Lambda layer to package all your requirements; make sure you have numpy in the requirements.txt file, and try it once.
This works only when the serverless-python-requirements plugin is listed in the plugins section.
Replace your custom key with the following and give the functions a reference to that layer:
custom:
  pythonRequirements:
    layer: true
functions:
  auth:
    handler: data.auth
    events:
      - http:
          path: /auth
          method: get
          cors: true
    layers:
      - { Ref: PythonRequirementsLambdaLayer }
I checked with zipinfo .requirements.zip and found that macOS dylibs were packaged instead of Linux .so files.
I fixed this by using dockerizePip: non-linux.
Be aware that packaging will not be re-triggered if a .requirements.zip already exists in the working directory, so run
git clean -xfd before running sls deploy
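A minimal sketch of the corresponding pythonRequirements block, assuming the rest of the serverless.yml above stays unchanged:
custom:
  pythonRequirements:
    dockerizePip: non-linux  # build wheels in Docker only when not already on Linux
    slim: true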
Since you are using the serverless-python-requirements plugin, it will package the libraries for you. In other words, you don't need to run pip install -t src/vendor -r requirements.txt --no-cache-dir and all that stuff manually.
To solve your problem, remove src/vendor and the following two lines in data.py:
import sys
sys.path.insert(0, 'src/vendor') # Location of packages that follow
Then sit back, and let serverless-python-requirements do the work for you.

RestAssured test for Quarkus not running within Docker container

I have a normal Quarkus RestAssured test working fine locally on my workstation:
@Test
public void testHelloEndpoint() {
    given()
        .when().get("/ifc")
        .then()
        .statusCode(200)
        .body(containsString("hello"));
}
However, when I run this on GitLab CI within a Docker container from the image maven:3.6.3-jdk-11, it hangs. I suppose the test wants to connect to localhost:8081 inside the container, which does not work.
How to solve this?
gitlab-ci:
image: maven:3.6.3-jdk-11
variables:
  MAVEN_CLI_OPTS: "-s m2-settings.xml --batch-mode"
  #MAVEN_CLI_OPTS: ""
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
cache:
  paths:
    #- .m2/repository/
    #- target/
build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile
test:
  stage: test
  script:
    - java --version
    - mvn $MAVEN_CLI_OPTS install
deploy:
  stage: deploy
  script:
    - mvn $MAVEN_CLI_OPTS deploy
  only:
    - master
  when: manual
When I run the same Docker image locally (not on GitLab), I see the following errors:
[ERROR] Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 61.157 s <<< FAILURE! - in ch.siemens.bt.ifc.ResourceTest
[ERROR] testGenerateEndpoint Time elapsed: 0.073 s <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class org.codehaus.groovy.reflection.ReflectionCache
[ERROR] testTransformEndpoint Time elapsed: 0.001 s <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class io.quarkus.test.common.RestAssuredURLManager
[ERROR] testHelloEndpoint Time elapsed: 0.001 s <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class io.quarkus.test.common.RestAssuredURLManager
I have realised that using the image quay.io/quarkus/centos-quarkus-maven:20.0.0-java11 works. I don't see why, because I am not doing native builds.
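Based on that finding, a minimal sketch of the adjusted test job, assuming the variables and stages from the gitlab-ci file above stay the same:
test:
  stage: test
  # Override the job image with the one observed to work for Quarkus tests
  image: quay.io/quarkus/centos-quarkus-maven:20.0.0-java11
  script:
    - mvn $MAVEN_CLI_OPTS install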

Travis PR failed, push passed

The branch was previously functional, then was merged to master, and the builds on master failed. Master was reverted, then master was merged into this branch and some fixes were made. When attempting to merge back to master, the build failed again with the following error. The push passed; the PR failed.
* What went wrong:
Could not resolve all files for configuration ':app:debugCompileClasspath'.
> Could not find com.squareup.leakcanary:leakcanary-android:1.5.4.
The .travis.yml file:
sudo: false
language: android
android:
  components:
    - build-tools-27.0.2
    - android-27
    - sys-img-armeabi-v7a-android-27
jdk:
  - oraclejdk8
before_install:
  - yes | sdkmanager "platforms;android-27"
  - chmod +x gradlew
# First app is built then unit tests are run
jobs:
  include:
    - stage: build
      async: true
      script: ./gradlew assemble
    - stage: test
      async: true
      script: ./gradlew -w runUnitTests
notifications:
  email:
    recipients:
      - email@me.com
    on_success: always # default: change
    on_failure: always # default: always
I suspect there was a Maven repository outage today, as I faced the same issue. Hours later, I found that the failed Travis job was working fine again. Do check it on your side.
Also, for any scenario where classpath dependencies are missing, one should check the build.gradle file rather than the .travis.yml file.
The failure message says that the app:debugCompileClasspath configuration fails when looking for com.squareup.leakcanary:leakcanary-android:1.5.4 (JAR or AAR). Gradle allows you to define the repositories at the root level:
allprojects {
    repositories {
        jcenter() // points to https://jcenter.bintray.com/
    }
}
So it will look in the following places for the class files or JAR file:
Name: $ANDROID_HOME/extras/m2repository; url: file:/$ANDROID_HOME/extras/m2repository/
Name: $ANDROID_HOME/extras/google/m2repository; url: file:/$ANDROID_HOME/extras/google/m2repository/
Name: $ANDROID_HOME/extras/android/m2repository; url: file:/$ANDROID_HOME/extras/android/m2repository/
Name: BintrayJCenter; url: https://jcenter.bintray.com/
If the dependency is not found in any of them, resolution fails with the error mentioned above.

How to get a slack notification when build fails?

When my build is successful I get a Slack notification; when it fails I do not. Looking at the Drone web UI, it looks like the pipeline stops once the build fails and the Slack plugin is never run.
A successful build reaches the notify step; a failed build never gets to it.
The key parts of the .drone.yml are as follows:
build:
  image: propheris/ruby:2.4.0
  secrets: [gems_password]
  commands:
    - exit 0
notify:
  image: plugins/slack
  webhook: https://example.com/hooks/token
  channel: dev
  username: drone
  icon_emoji: drone
I change exit 0 or exit 1 to simulate a successful or failed build.
Drone 0.7
plugins/slack
I've taken a look at the docs and it seems you're missing the following:
when:
  status: [ success, failure ]
The docs state:
Example configuration for success and failure messages:
pipeline:
  slack:
    image: plugins/slack
    webhook: https://hooks.slack.com/services/...
    channel: dev
    when:
      status: [ success, failure ]
You can also add custom messages:
Example configuration with a custom message template:
pipeline:
  slack:
    image: plugins/slack
    webhook: https://hooks.slack.com/services/...
    channel: dev
    template: >
      {{#success build.status}}
        build {{build.number}} succeeded. Good job.
      {{else}}
        build {{build.number}} failed. Fix me please.
      {{/success}}
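Applied to the asker's .drone.yml above, a minimal sketch of the notify step with the when condition added (webhook and channel kept as posted):
notify:
  image: plugins/slack
  webhook: https://example.com/hooks/token
  channel: dev
  username: drone
  icon_emoji: drone
  # run on both outcomes so failed builds also send a notification
  when:
    status: [ success, failure ]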