Azure PS command returns empty list

We have created a resource group with a VMSS instance within.
When running the "az vmss list" command within the Cloud Shell, we are able to get the respective resource.
However, when running the exact same command in PS, it returns an empty list, even though the resource exists.
We have also tried a different naming convention and tested with a direct ARM API call, getting the same outcome. See the command below:
Invoke-RestMethod -Uri https://management.azure.com/subscriptions/***/resourceGroups/***-nodepool-prd-devtest/providers/Microsoft.Compute/virtualMachineScaleSets?api-version=2020-12-01 -Method GET -Headers $headers -ContentType 'application/json'
Output:
value
-----
{}
We have tried to add another VMSS to the same resource group as a test and noticed that made the other one pop up.
It looks to me as if it is a caching issue but I can't figure out how to resolve it. Any ideas?
Things to keep in mind:
using the same CLI version as Cloud Shell (2.23.0)
running the command from the pipeline returns an empty list
a team member is able to get the resource from their own machine

I saw a similar issue where a specific subscription wasn't showing up in the output of a list command.
We used the PowerShell commands below to resolve the issue:
Disable-AzureRmContextAutosave
Disconnect-AzureRmAccount
Since you are using the Az module, please try these instead:
Disable-AzContextAutosave
Disconnect-AzAccount
Source: https://github.com/Azure/azure-powershell/issues/6289
PS: it seems you should edit the title to "Azure PS command returns empty list".
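For completeness, here is a minimal sketch of clearing the cached context and re-authenticating against the correct subscription before listing the scale sets; the subscription and resource group values are placeholders, not taken from the question:
# Clear the auto-saved context so a stale token or subscription is not reused
Disable-AzContextAutosave -Scope CurrentUser
Disconnect-AzAccount
# Sign in again and explicitly select the subscription that contains the VMSS
Connect-AzAccount
Set-AzContext -Subscription '<subscription-id>'
# Verify the scale set is now visible
Get-AzVmss -ResourceGroupName '<resource-group-name>'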

How to use Subnetid parameter with New-AzContainerGroup Powershell Cmdlet?

I am testing creating an Azure container instance group via an automation job, which works, but in order to meet my requirements I need it to be attached to one of my configured VNETs rather than using a public IP.
According to the documentation I need to create a hash table for the subnet ID, which I've attempted to do here, in the $hash variable
Name Value
---- -----
ManagedServices /subscriptions/{My-subscriptionID}/resourceGroups/{My-Resource-Group}/providers/Microsoft.Network/virtualNetworks/{My-VNET}/subnets/{My-Subnet}
However, whenever I run the command attempting to pass the $hash variable in, I receive the following error
New-AzContainerGroup : A parameter cannot be found that matches parameter name 'SubnetId'.
Here is the exact command that I am using
New-AzContainerGroup -ResourceGroupName Dev-Test -Name mycontainer -Image najarramsada/phpipamscanagent -OsType Linux -DnsNameLabel phpipamtest -SubnetId $hash
I am expecting this to run and create a container in my chosen subnet but instead I am getting the error indicating that the parameter doesn't exist. I am also not certain I created my hash table correctly as I've never used them before.
The error message you are receiving indicates that you have spelled the parameter incorrectly.
New-AzContainerGroup : A parameter cannot be found that matches parameter name 'SubbnetId'.
New-AzContainerGroup -ResourceGroupName Dev-Test -Name mycontainer -Image najarramsada/phpipamscanagent -OsType Linux -DnsNameLabel phpipamtest -SubnetId $hash
When creating your hashtable it looks as though you may need to use the following:
$hash = @{
    Id = 'resourceId'
    Name = 'FriendlyName'
}
Taken from the NOTES section here:
https://learn.microsoft.com/en-us/powershell/module/az.containerinstance/new-azcontainergroup?view=azps-7.3.2#notes
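As a rough sketch of how that would fit together (assuming Az.ContainerInstance 2.x, where -SubnetId is available and the container itself is built with New-AzContainerInstanceObject; the subnet resource ID, location, and friendly name below are placeholders):
# Container definition (2.x replaces -Image on New-AzContainerGroup with a container object)
$container = New-AzContainerInstanceObject -Name mycontainer -Image najarramsada/phpipamscanagent
# Hashtable describing the delegated subnet; Id must be the full subnet resource ID
$subnet = @{
    Id = '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>'
    Name = '<subnet-friendly-name>'
}
New-AzContainerGroup -ResourceGroupName Dev-Test -Name mycontainer -Location westeurope `
    -Container $container -OsType Linux -SubnetId $subnet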

Compilation of contract with include

I'm currently struggling to compile a contract (in aeternity's Sophia language) that includes a custom library "Library.aes", which resides in a separate file at the same level of the filesystem as the contract that uses it.
The library looks like
namespace Library =
  type number = int
  function inc(x : number) : number = x + 1
The contract is using it like this
include "Library.aes"
When I compile the contract (locally, using a compiler node), I always get
"Couldn't find include file 'Library.aes'\n"
I also tried passing the full path in the include, with the same result.
Do I need to define the options.file_system attribute somehow?
Let's use the same example:
~/Quviq/Aeternity/aesophia_http [git:master]: FOO="include \\\"Bar.aes\\\"\\n\\ncontract Foo =\\n entrypoint foo() = Bar.bar()"
~/Quviq/Aeternity/aesophia_http [git:master]: BAR="namespace Bar =\\n function bar() = 42"
~/Quviq/Aeternity/aesophia_http [git:master]: curl -H "Content-Type: application/json" -d "{\"code\":\"$FOO\",\"options\":{\"backend\":\"fate\",\"file_system\":{\"Bar.aes\":\"$BAR\"}}}" -X POST http://localhost:3080/compile
{"bytecode":"cb_+IJGA6AANCB3UsSiP2HGHRML0dG95vNT9JsqZQMjPYAfEG1w6cC4Va3+RNZEHwA3ADcAGg6CPwEDP/5sbA2iAjcABwEDVP64/p9/ADcABwQDEWxsDaKjLwMRRNZEHxFpbml0EWxsDaIhLkJhci5iYXIRuP6ffw1mb2+CLwCFNC4yLjAAfreb3w=="}
Beware the quoting of strings, but apart from that it is rather straightforward.
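Decoded from the shell quoting, the JSON body sent to the compiler's /compile endpoint looks roughly like this; the point is that options.file_system maps each include file name to its source text:
{
  "code": "include \"Bar.aes\"\n\ncontract Foo =\n entrypoint foo() = Bar.bar()",
  "options": {
    "backend": "fate",
    "file_system": {
      "Bar.aes": "namespace Bar =\n function bar() = 42"
    }
  }
}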
This post is pretty old, but I thought I'd respond anyway.
Did you try this using aeproject?
https://github.com/aeternity/aepp-aeproject-js
Try putting your contract code under its file structure and use the deploy script, so you can work it out locally first and then deploy to testnet or mainnet.
If you have Erlang installed you can use aesophia_cli instead of hosting a local node. It searches for include files in the same directory as your main .aes file.

Making a Google BigQuery from Python on Windows

I am trying to do something which is very simple in other data services: make a relatively simple SQL query and return it as a dataframe in Python. I am on Windows 10, using Python 2.7 (specifically Canopy 1.7.4).
Typically this would be done with pandas.read_sql_query, but due to some specifics of BigQuery it requires a different method, pandas.io.gbq.read_gbq.
This method works fine unless the result set is large. If you run a big query on BigQuery, you get the error
GenericGBQException: Reason: responseTooLarge, Message: Response too large to return. Consider setting allowLargeResults to true in your job configuration. For more information, see https://cloud.google.com/bigquery/troubleshooting-errors
This was asked and answered before in this ticket, but neither of the solutions is applicable in my case:
Python BigQuery allowLargeResults with pandas.io.gbq
One solution is for Python 3, so it is a nonstarter. The other gives an error because I am unable to set my credentials as an environment variable in Windows:
ApplicationDefaultCredentialsError: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I was able to download the JSON credentials file and I have set it as an environment variable in the few ways I know how, but I still get the above error. Do I need to load this in some way in Python? It seems to be looking for it but unable to find it correctly. Is there a special way to set it as an environment variable in this case?
You can do it in Python 2.7 by changing the default dialect from legacy to standard in the pd.read_gbq function.
pd.read_gbq(query, 'my-super-project', dialect='standard')
Indeed, the BigQuery documentation says the following about the allowLargeResults parameter:
allowLargeResults: For standard SQL queries, this flag is ignored and large results are always allowed.
I have found two ways of directly importing the JSON credentials file. Both based on the original answer in Python BigQuery allowLargeResults with pandas.io.gbq
1) Credit to Tim Swast
First
pip install google-api-python-client
pip install google-auth
pip install google-cloud-core
then
replace
credentials = GoogleCredentials.get_application_default()
in create_service() with
from google.oauth2 import service_account
credentials = service_account.Credentials.from_service_account_file('path/file.json')
2)
Set the environment variable manually in the code like
import os, os.path
os.environ['GOOGLE_APPLICATION_CREDENTIALS']=os.path.expanduser('path/file.json')
I prefer method 2 since it does not require new modules to be installed and is also closer to the intended use of the JSON credentials.
Note:
You must create a destinationTable and add the information to run_query()
Here is code that works fully within Python 2.7 on Windows:
import pandas as pd
my_qry="<insert your big query here>"
### Here Put the data from your credentials file of the service account - all fields are available from there###
my_file="""{
"type": "service_account",
"project_id": "cb4recs",
"private_key_id": "<id>",
"private_key": "<your private key>\n",
"client_email": "<email>",
"client_id": "<id>",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "<x509 url>"
}"""
df = pd.read_gbq(my_qry, project_id='<your project id>', private_key=my_file)
That's it :)
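Putting the two answers together (environment-variable credentials from method 2 plus the standard dialect, which sidesteps allowLargeResults), a minimal sketch could look like this; the query, project ID, and credentials path are placeholders:
import os
import pandas as pd

# Point the Google client libraries at the downloaded service-account JSON file
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = os.path.expanduser('path/file.json')

# With dialect='standard', allowLargeResults is ignored and large results are always allowed
df = pd.read_gbq('<insert your big query here>', project_id='<your project id>', dialect='standard')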

Travis-CI, how to get committer_email, author_name, within after_script command

I know committer_email, author_name, and a load of other variables are part of the notification event. Is it possible to get access to them in earlier events like before_script or after_script?
I would like to get access to that information and add it directly to my test results. Having build information, test result information, and GitHub repo information in the same file would be great.
You can extract committer e-mail, author name, etc. to environment variables using git log with --pretty, e.g.
export COMMITTER_EMAIL="$(git log -1 $TRAVIS_COMMIT --pretty="%cE")"
export AUTHOR_NAME="$(git log -1 $TRAVIS_COMMIT --pretty="%aN")"
On Travis you would put this in the before_install or before_script stage.
The TRAVIS_COMMIT environment variable is provided by default.
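As a minimal sketch of how this could look in .travis.yml (the test script and results file name are illustrative, not from the original question):
before_script:
  # TRAVIS_COMMIT is provided by Travis; derive committer/author info from the commit itself
  - export COMMITTER_EMAIL="$(git log -1 $TRAVIS_COMMIT --pretty="%cE")"
  - export AUTHOR_NAME="$(git log -1 $TRAVIS_COMMIT --pretty="%aN")"
script:
  - ./run_tests.sh
after_script:
  # Append build and commit metadata to the same file as the test results
  - echo "$TRAVIS_BUILD_NUMBER,$TRAVIS_REPO_SLUG,$AUTHOR_NAME,$COMMITTER_EMAIL" >> test-results.csv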

Google Apps Script ScriptDb permissions issue

I am having an issue trying to query the ScriptDb of a resource file in Google Apps Script. I create a script file (file1) and add it as a resource to another script file (file2). I call file1 from file2 to return a handle to its ScriptDb, and this works fine. I then try to query the ScriptDb but get a permissions error.
Both files owned by same user and in same google environment
See code below:
file 1:
function getMyDb() {
  return ScriptDb.getMyDb;
}
file 2 (references file1):
function getDataFromFile1() {
  var db = file1.getMyDb(); // This works
  var result = db.query({..............}); // This results in a permissions error!
}
I am at a loss to understand why I can access file1 and get back a handle on the ScriptDb, but am then not able to query it due to a permissions issue.
I have tried to force file1 to require re-authorization, but have not yet been successful. I tried adding a new function and running it, so any suggestions there would be gratefully received.
Thanks in advance
Chris
There appears to be an error in file1/line2. It says "return ScriptDb.getMyDb;" but it should say "return ScriptDb.getMyDb();"
If you leave out the parentheses, then when you call file1 as a library, file1.getMyDb() returns a function, which you store in var db. The line var result = db.query({..............}) then results in an error because a function has no "query" method.
Is that what's causing your error?
I have figured out what the problem was: a misunderstanding on my part regarding authorisation. I was thinking of it in terms of file permissions, when in fact the problem was that my code was not authorised to use the ScriptDb service. Because my code calls a different file and receives back a pointer to a ScriptDb database, it is not itself using the ScriptDb service; when it then calls db.query(), it invokes the ScriptDb service, for which it is not authorised.
To resolve this I just had to create a dummy function and make a ScriptDb.getMyDb() call, which triggered authorisation for the service. The code then worked fine.
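For reference, a dummy function along these lines (the name is arbitrary) is enough to trigger the authorisation prompt for the service:
function authorizeScriptDb() {
  // Referencing ScriptDb directly in this project prompts for authorisation of the service
  var db = ScriptDb.getMyDb();
  Logger.log(db);
}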
Thanks for the input though.
Chris