Hi, I'm trying to import a PowerShell module into an Automation account that I have, using Bicep.
The module:
Az.Storage, version 2.0.0
Following the documentation, I'm using this code:
resource znssPSModulesAzStorageName 'Microsoft.Automation/automationAccounts/modules@2020-01-13-preview' = {
  name: psModules.azStorage.name
  location: location
  parent: znssAutomationAccountName
  tags: {}
  properties: {
    contentLink: {
      uri: 'https://www.powershellgallery.com/api/v2/packages/Az.Storage/2.0.0'
    }
  }
}
But I'm getting this error:
Error importing the module Az.Storage. Import failed with the following error:
Orchestrator.Shared.AsyncModuleImport.ModuleImportException: No content was read from the supplied ContentLink. [ContentLink.Uri=https://www.powershellgallery.com/api/v2/packages/Az.Storage/2.0.0]
at Orchestrator.Activities.GetModuleContentActivity.ExecuteInternal(CodeActivityContext context, String contentUri, String contentHashAlgorithm, String contentHashValue, String contentVersion, String moduleName, ModuleLanguage moduleLanguage)
at Orchestrator.Activities.GetModuleContentActivity.Execute(CodeActivityContext context)
at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)
at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)
Is there a problem with the link in PowerShell Gallery? I searched the internet and couldn't find anything useful. I hope someone can help me.
The problem is in the uri. Instead of:
uri: 'https://www.powershellgallery.com/api/v2/packages/Az.Storage/2.0.0'
use package (singular, without the s):
uri: 'https://www.powershellgallery.com/api/v2/package/Az.Storage/2.0.0'
Then it should work:
resource znssPSModulesAzStorageName 'Microsoft.Automation/automationAccounts/modules@2020-01-13-preview' = {
  name: psModules.azStorage.name
  location: location
  parent: znssAutomationAccountName
  tags: {}
  properties: {
    contentLink: {
      uri: 'https://www.powershellgallery.com/api/v2/package/Az.Storage/2.0.0'
    }
  }
}
I'm running a Dusk test with Lighthouse inside it, like:
class ExampleTest extends DuskTestCase
{
    use DatabaseMigrations;

    public function testExample()
    {
        // this is a mutation for adding more stuff
        $this->graphQL('mutation ...');
        $this->browse(function (Browser $browser) use ($url) {
            $browser->visit($url);
            // asserts...
        });
    }
}
And on my mutations, the error message is:
"""Lighthouse failed while trying to load a type: typeFromMySchemaExample \n
\n
Make sure the type is present in your schema definition.\n
"""
I've already verified that the schema is valid with:
php artisan lighthouse:validate-schema
And checked the schema itself to see if that type is present with:
php artisan lighthouse:print-schema
And I cleared all configs/caches from Laravel and Lighthouse, as in solution #1, with no success.
In my composer.json, I have:
laravel/dusk v6.12.0
nuwave/lighthouse v5.2.0
phpunit/phpunit v9.5.2
PS: I commented that type out in the GraphQL schema, and the error keeps occurring, following my import order in schema.graphql.
When adding a response schema that leverages the $merge keyword to a fastify route, an error
FST_ERR_SCH_BUILD: Failed building the schema for GET: /, due error undefined unsupported
is thrown.
The schema looks like the following, but the same error is thrown using the examples from the ajv or fastify docs.
response: {
  200: {
    $merge: {
      source: {
        type: 'object',
        properties: {
          foo: { type: 'string' }
        }
      },
      with: {
        type: 'object',
        properties: {
          bar: { type: 'string' }
        }
      }
    }
  }
}
Workaround described in my own answer below.
I have found a workaround for this: it seems that, unlike when using $merge in any other schema, either fastify or ajv requires the type keyword to be present at the $merge level. This might be a bug, since the type can be deduced from the merged objects, and the same approach works when using $merge in other schemas.
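A minimal sketch of the workaround in context (hedged: the route, the handler, and the ajv-merge-patch registration are illustrative assumptions on my part, not from the original post); the only change to the failing schema above is the extra type keyword next to $merge:

const fastify = require('fastify')({
  // assumption: ajv-merge-patch is what supplies the $merge keyword for validation
  ajv: { plugins: [require('ajv-merge-patch')] }
})

fastify.get('/', {
  schema: {
    response: {
      200: {
        type: 'object', // the workaround: declare the type at the $merge level
        $merge: {
          source: {
            type: 'object',
            properties: {
              foo: { type: 'string' }
            }
          },
          with: {
            type: 'object',
            properties: {
              bar: { type: 'string' }
            }
          }
        }
      }
    }
  }
}, async () => ({ foo: 'a', bar: 'b' }))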
The serializer doesn't implement ajv's schema customizations (such as $merge); under the hood, fast-json-stringify is used by default. You should use standard JSON Schema and its combining keywords. In fastify v2 the serializer that uses the schemas is not customizable, so you should write your own serializer and set it up using setReplySerializer.
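For illustration, a sketch of the same response schema rewritten with the standard allOf combining keyword (the two object shapes are taken from the question; whether this covers the poster's full use case is my assumption):

response: {
  200: {
    allOf: [
      { type: 'object', properties: { foo: { type: 'string' } } },
      { type: 'object', properties: { bar: { type: 'string' } } }
    ]
  }
}

And if the $merge keyword must stay, a minimal custom serializer on fastify v2 could look roughly like this (a hedged sketch: JSON.stringify bypasses schema-based serialization entirely, trading away fast-json-stringify's speed):

fastify.setReplySerializer(function (payload, statusCode) {
  // serialize the payload as-is instead of compiling the response schema
  return JSON.stringify(payload)
})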
I'm trying to upload a batch of XML files (1,000 or so) from my computer to Firestore. The users of my Android app need to be able to search these files, and because Firebase Storage doesn't provide that feature, I thought I'd just dump them in Firestore.
To test it, I'm trying to upload 10 files to start with. The code I have looks something like this:
fun main() {
    val db = getDatabase()
    getXMLFiles().take(10).forEach { file ->
        db.collection("xml").add(XMLFile(file.name, file))
    }
}

private fun getDatabase(): Firestore {
    val serviceAccount = FileInputStream("<project name>-firebase-adminsdk-<some other stuff>.json")
    val firestoreOptions = FirestoreOptions.newBuilder()
        .setTimestampsInSnapshotsEnabled(true).build()
    val options = FirebaseOptions.Builder()
        .setCredentials(GoogleCredentials.fromStream(serviceAccount))
        .setDatabaseUrl("https://<project name>.firebaseio.com")
        .setFirestoreOptions(firestoreOptions)
        .build()
    FirebaseApp.initializeApp(options)
    return FirestoreClient.getFirestore()
}

fun getXMLFiles() = File("/XML").walk().asIterable()

data class XMLFile(val name: String, val file: File)
Nothing gets uploaded, and half the time (it's weirdly inconsistent) I get this error message:
Apr 15, 2019 2:05:53 PM com.google.auth.oauth2.ComputeEngineCredentials runningOnComputeEngine
WARNING: Failed to detect whether we are running on Google Compute Engine.
java.net.ConnectException: No route to host (connect failed)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.Socket.connect(Socket.java:591)
at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)
at java.base/sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:341)
at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:362)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1242)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1181)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1075)
at java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1009)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:104)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:981)
at com.google.auth.oauth2.ComputeEngineCredentials.runningOnComputeEngine(ComputeEngineCredentials.java:210)
at com.google.auth.oauth2.DefaultCredentialsProvider.tryGetComputeCredentials(DefaultCredentialsProvider.java:290)
at com.google.auth.oauth2.DefaultCredentialsProvider.getDefaultCredentialsUnsynchronized(DefaultCredentialsProvider.java:207)
at com.google.auth.oauth2.DefaultCredentialsProvider.getDefaultCredentials(DefaultCredentialsProvider.java:124)
at com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:127)
at com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:100)
at com.google.cloud.ServiceOptions.defaultCredentials(ServiceOptions.java:304)
at com.google.cloud.ServiceOptions.<init>(ServiceOptions.java:278)
at com.google.cloud.firestore.FirestoreOptions.<init>(FirestoreOptions.java:225)
at com.google.cloud.firestore.FirestoreOptions$Builder.build(FirestoreOptions.java:219)
at UploaderKt.getDatabase(Uploader.kt:32)
at UploaderKt.main(Uploader.kt:13)
at UploaderKt.main(Uploader.kt)
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Process finished with exit code 0
I checked, and it does find the .json with the credentials, so that is not the problem. I can't find anything anywhere about the error message
Apr 15, 2019 2:05:53 PM com.google.auth.oauth2.ComputeEngineCredentials runningOnComputeEngine
WARNING: Failed to detect whether we are running on Google Compute Engine.
So I hope someone can help.
By the way, without showing too much, my .json looks like this:
{
  "type": "service_account",
  "project_id": "XXX",
  "private_key_id": "XXX",
  "private_key": "-----BEGIN PRIVATE KEY-----XXX-----END PRIVATE KEY-----\n",
  "client_email": "firebase-adminsdk-XXX@XXX.iam.gserviceaccount.com",
  "client_id": "XXX",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/firebase-adminsdk-XXX%40XXX.iam.gserviceaccount.com"
}
Edit: this might be stating the obvious, but I have no problems connecting to the internet otherwise.
I have set up a Sails.js project and am trying to access RabbitMQ using the sails-rabbitmq adapter. I have followed https://www.npmjs.com/package/sails-rabbitmq .
I want to use MongoDB with RabbitMQ. The problem is that when I run sails lift, I get this error:
error: A hook (orm) failed to load!
error: Error: One of your models (message) refers to multiple datastores.
Please set its configured datastore to a string instead of an array in its model definition (.connection) or the app-wide default (sails.config.models.connection)
(this is conventionally set in your config/models.js file, or as part of your app's environment-specific config).
at constructError (C:\Users\demoapp\AppData\Roaming\npm\node_modules\sails\node_modules\sails-hook-orm\lib\construct-error.js:57:13)
at validateModelDef (C:\Users\demoapp\AppData\Roaming\npm\node_modules\sails\node_modules\sails-hook-orm\lib\validate-model-def.js:97:11)
at C:\Users\demoapp\AppData\Roaming\npm\node_modules\sails\node_modules\sails-hook-orm\lib\initialize.js:218:36
at arrayEach (C:\Users\demoapp\AppData\Roaming\npm\node_modules\sails\node_modules\lodash\index.js:1289:13)
at Function.<anonymous> (C:\Users\demoapp\AppData\Roaming\npm\node_modules\sails\node_modules\lodash\index.js:3345:13)
at Array.async.auto._normalizeModelDefs (C:\Users\demoapp\AppData\Roaming\npm\node_modules\sails\node_modules\sails-hook-orm\lib\initialize.js:216:11)
at listener (C:\Users\demoapp\AppData\Roaming\npm\node_modules\sails\node_modules\sails-hook-orm\node_modules\async\lib\async.js:605:42)
at C:\Users\demoapp\AppData\Roaming\npm\node_modules\sails\node_modules\sails-hook-orm\node_modules\async\lib\async.js:544:17
at _arrayEach (C:\Users\demoapp\AppData\Roaming\npm\node_modules\sails\node_modules\sails-hook-orm\node_modules\async\lib\async.js:85:13)
at Immediate.taskComplete (C:\Users\demoapp\AppData\Roaming\npm\node_modules\sails\node_modules\sails-hook-orm\node_modules\async\lib\async.js:543:13)
at processImmediate [as _immediateCallback] (timers.js:383:17)
I have
connection: [ 'rabbitCluster', 'regularMongo' ]
in my Message model. regularMongo is the MongoDB connection. Please let me know what other configuration I am missing.
With the following config I do not see any error. In sails.config.models, set:
module.exports.models = {
  connection: 'someMongodbServer',
  migrate: 'safe'
};
In Message.js, set:
module.exports = {
  connection: [ 'rabbitCluster', 'someMongodbServer' ],
  routingKey: [ 'parentMessage' ],
  attributes: {
    title: 'string',
    body: 'string',
    parentMessage: {
      model: 'message'
    }
  }
};
When I try to connect the Puppet agent with puppet agent --test, I get this error:
Info: Retrieving plugin
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class <my_module> for <my_agent> on node <my_agent>
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
I have import "nodes" in sites.pp and include <my_module> in nodes.pp.
--edit--
Content of sites.pp :
import "nodes"
filebucket { main: server => "<my_master>" }
File { backup => main }
Exec { path => "/usr/bin:/usr/sbin:/bin:/sbin" }
Content of nodes.pp :
node "<my_agent>" {
include <my_module>
}
--edit--
What is the real problem?
Thanks
I have created another VM, and that's working now! =)
Maybe I made a mistake in the network configuration.