I am trying to publish an Azure Data Factory pipeline, but I'm getting the error:
Error The document creation or update failed because of invalid
reference 'master'. Please ensure 'master' exists in data factory mode
and recreate it in Git mode if already present.
I am familiar with the error. However, I can't find the reference 'master'. Can someone let me know how to go about tracking it down?
Thanks
This issue is commonly caused by a mismatch between Data Factory mode and Git mode. It may happen when Git is first configured, or when changes are added directly in Git or in Live mode.
If you are unable to find and fix the conflict manually, you may re-sync the content in the Git Configuration page, by using either:
Overwrite Live Mode (which I recommend): makes the published Data Factory version match the Git branch.
Import Resources: Makes a Git branch match Data Factory mode.
[Screenshot: Git configuration page]
Please be advised that overwriting live mode may result in losing changes that are not currently in Git. You can use Import Resources first to persist those changes.
When running dbt deps, I get back this error message:
Running with dbt=0.17.0
Error sending message, disabling tracking
Encountered an error:
Unable to connect to registry hub
What's happening here, and how can I work around it?
First of all, it's worth understanding what's going on here. It looks like you're trying to install a package from the dbt hub site (hub.getdbt.com) — if you open up your packages.yml file, you'll find something like this:
packages:
  - package: package-owner/package-name
    version: 0.1.0
When you run dbt deps (at a high level):
dbt sends a request to hub.getdbt.com
From hub.getdbt.com, a request is sent to GitHub to download the package.
The package is copied into your project
This error occurs when dbt cannot connect to the hub site, even after retrying the network request. First off, we recommend simply re-running the dbt deps command; sometimes it's just a blip in connectivity that goes away on the second try.
If the error persists, there may be a few different reasons for it:
hub.getdbt.com might be unavailable. This happens but is relatively rare. You can navigate to hub.getdbt.com to check if this is the case. Also check the Netlify status page to see if there are any issues.
GitHub might be down — you can check this by going to the GitHub status page.
Finally, it may be that a firewall rule or antivirus software on your computer is rejecting the request. Talk to your IT team to find out if this is the case and whether that restriction can be removed.
We generally recommend using the hub syntax for packages. However, if you need to work around this, you can consider using the git syntax (docs) or installing the package from a local directory (docs), as in the sketch below.
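As a rough sketch (the repository URL, revision, and local path here are placeholders, not real packages), a packages.yml using the git and local syntax might look like this:

packages:
  # install directly from a Git repository, pinned to a tag or commit
  - git: "https://github.com/package-owner/package-name.git"
    revision: 0.1.0
  # install from a directory on the local filesystem
  - local: ./local-packages/package-name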
This question is related to "I need help upgrading OroCommerce to 4.1.1".
I'm getting several errors related to extended entities... I believe there must be something wrong with cache building but I can't find the root cause (nor a solution :( ).
I checked the db structure in my production server against the VM where everything is working just fine and I can't see any significant difference (meaning the new fields such as digitalAsset_id for oro_attachment_file table or wysiwyg for oro_fallback_localization_val are there).
I just ran an extra php bin/console oro:migration:load --force -e prod, but it didn't make a difference...
Edit:
Just checked the differences in the var/cache directory of both installations and in fact I see that the VM version has the methods that are missing from the prod one.
I uploaded the working code to the production server and re-ran the platform upgrade, but I'm still running into issues.
If the oro:migration:load command (or oro:platform:update, which actually triggers the migration load) failed the first time, you have to:
fix the errors,
restore the database from a dump,
and run the command again.
Otherwise, some migrations may have ended with errors, but they are not executed again on the second run, which can leave the database schema, entity metadata, or entity config in an inconsistent state.
Also, the oro:migration:load command is not self-sufficient: some entity configuration may need to be warmed up after the schema change. Try running oro:platform:update even if all the migrations have already been executed; it will warm up all the caches and may fix the error. A rough sketch of the full recovery sequence is shown below.
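As a hedged sketch of that sequence (the database name, user, and dump file are placeholders, and the example assumes a MySQL database; adjust to your environment):

# restore the database from the dump taken before the failed upgrade
mysql -u oro_user -p oro_db < pre_upgrade_dump.sql

# re-run the platform update: it loads pending migrations and warms up entity config caches
php bin/console oro:platform:update --force -e prod

# clear and rebuild the application cache for the prod environment
php bin/console cache:clear -e prod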
I have made changes in the Azure Data Factory pipeline, selected the "VSTS Git" option, and am trying to publish them. It was working fine, but now I have started getting the error message below:
Error while publishing: Cannot read property 'constructor' of undefined
I retried after removing the changes, but I'm still getting the same issue.
There could be a broken resource in your collaboration branch which is causing the issue.
You can re-sync Git and Live mode by disconnecting and reconnecting Git.
Please note that you would lose the differences between the two.
Hope this will help.
I'm using GitLab CI on one of my projects and I'm facing the following problem:
My master build has been failing for a long time...
If I create a new branch from master (no new commits) and push it, the build works.
I think it's related to the build cache, because the codebase is strictly the same... The latest valid build cache may be making the current codebase fail...
Is there a way to clear the build cache for a specific branch, in my case master? From the API?
Finally, the GitLab team gave me the solution on Twitter: https://twitter.com/gitlab/status/832674380790394880
Since my repository is hosted on gitlab.com, I can't remove the cache myself. But the .gitlab-ci.yml documentation explains that we can use a cache:key entry.
This cache:key determines how the cache entry is named, so I can change the default value to start from a blank cache 😊.
Below is a sample of my .gitlab-ci.yml file:
my-asset-build:
  cache:
    key: "$CI_COMMIT_REF_NAME-assets"
With that configuration, my cache is tied to the current ref (so a build on the same ref will reuse the cache), plus a suffix! To actually start from a blank cache, I just change the key, as in the sketch below.
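For example (a sketch; the job name, suffix, and cached paths are placeholders from my setup), bumping a version suffix in the key makes GitLab look for a cache entry that does not exist yet, so the next build starts from an empty cache:

my-asset-build:
  cache:
    # changing the suffix (e.g. to -v2) points at a fresh, empty cache entry
    key: "$CI_COMMIT_REF_NAME-assets-v2"
    paths:
      - node_modules/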
Thanks to the GitLab team for their quick answer on Twitter!
If you have trouble with the variable name, check this page: https://docs.gitlab.com/ce/ci/variables/README.html#9-0-renaming ($CI_BUILD_REF_NAME was renamed to $CI_COMMIT_REF_NAME in GitLab 9.0).
Also, since GitLab 10.4, there is a "Clear runner cache" button in the pipeline list. Clicking that button has the same effect as changing the cache key, without polluting the commit history.
While building the Giraph jar with dependencies, we are getting the following warnings and are really not sure how to resolve them. We already tried setting
useProjectArtifact to false
and
unpack to true
but neither seems to work.
Any suggestions on how to resolve these?
[WARNING] Failure to transfer asm:asm/maven-metadata.xml from file:../../local.repository/trunk was cached in the local repository, resolution will not be reattempted until the update interval of local.repository has elapsed or updates are forced. Original error: Could not transfer metadata asm:asm/maven-metadata.xml from/to local.repository (file:../../local.repository/trunk): No connector available to access repository local.repository (file:../../local.repository/trunk) of type legacy using the available factories WagonRepositoryConnectorFactory
[WARNING] Failure to transfer asm:asm/maven-metadata.xml from file:../../local.repository/trunk was cached in the local repository, resolution will not be reattempted until the update interval of local.repository has elapsed or updates are forced. Original error: Could not transfer metadata asm:asm/maven-metadata.xml from/to local.repository (file:../../local.repository/trunk): No connector available to access repository local.repository (file:../../local.repository/trunk) of type legacy using the available factories WagonRepositoryConnectorFactory
This looks like a connection problem (proxy or firewall), so you can try to work around it with these solutions:
Explicitly declare the ASM dependency. Look up the correct version and add it to your pom (http://mvnrepository.com/artifact/asm/asm); see the sketch after these suggestions. After that, run mvn install to make sure everything is OK.
If that doesn't work, you can try manually downloading the dependency and copying it into your local repository (the local ".m2" folder), probably under "~/.m2/repository/asm/asm/". It isn't the best solution, but it may solve your problem.
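For the first suggestion, the dependency entry in pom.xml might look like this (the version is only an example; check mvnrepository.com for the one your build actually needs):

<dependencies>
  <!-- explicitly pin the ASM dependency so it is resolved from a reachable repository -->
  <dependency>
    <groupId>asm</groupId>
    <artifactId>asm</artifactId>
    <version>3.3.1</version>
  </dependency>
</dependencies>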
Hope it helps!