Stellar-core transactions not supported - stellar.js

I am getting tx_not_supported. I have compiled stellar-core 1.15.0 and launched the network. I am able to use Horizon and see the root account balance. However, when trying a transaction/create account, I get tx_not_supported.

What is different between Aptos devnet and testnet?

I'm starting to build a dapp on Aptos and I notice that there are two development networks, devnet and testnet. What are the differences between these two?
Update - 2022-01-26: Previously the testnet faucet required the user to complete a captcha. This is no longer true; the faucets of both networks work similarly, so that section has been removed from the answer.
Release cadence
Devnet is generally released every week. Testnet is generally released every two weeks, after devnet.
This means devnet gets new features sooner and more frequently.
Persistence
With devnet, the chain is reset every release. All data is wiped, including any deployed modules, accounts, etc., and the chain restarts from genesis with a new chain ID. If you're building on devnet, this means you must redeploy your Move modules and recreate your accounts every week.
Testnet is never wiped, similar to mainnet.
Faucet access
On both devnet and testnet you can create new accounts and get new APT easily by either:
Using the "Faucet" button in your wallet (e.g. in Petra).
Using the FaucetClient in the SDK (see the sketch after this list).
Using the aptos CLI:
aptos account fund-with-faucet --account 0xd0f523c9e73e6f3d68c16ae883a9febc616e484c4998a72d8899a1009e5a89d6
Hitting the faucet directly:
curl -X POST 'https://faucet.devnet.aptoslabs.com/mint?amount=100&address=0xd0f523c9e73e6f3d68c16ae883a9febc616e484c4998a72d8899a1009e5a89d6'
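For the FaucetClient route, here is a minimal sketch using the Aptos Python SDK (pip install aptos-sdk); module paths have moved between SDK releases, and the URLs and generated account below are illustrative defaults:

# Fund a fresh account from the devnet faucet; amounts are in octas
# (1 APT = 100_000_000 octas). Swap in the testnet URLs as needed.
from aptos_sdk.account import Account
from aptos_sdk.client import FaucetClient, RestClient

NODE_URL = "https://fullnode.devnet.aptoslabs.com/v1"
FAUCET_URL = "https://faucet.devnet.aptoslabs.com"

rest_client = RestClient(NODE_URL)
faucet_client = FaucetClient(FAUCET_URL, rest_client)

alice = Account.generate()  # fresh local keypair, for illustration only
faucet_client.fund_account(str(alice.address()), 100_000_000)
print(rest_client.account_balance(str(alice.address())))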
Which should you use?
Generally speaking, testnet is a friendlier developer experience because you don't need to keep redeploying your code or recreating accounts. For standard development, the amount of APT the testnet faucet gives you should be more than sufficient.
Devnet is good for rapid experimentation where you don't care about data persistence or if you're running tests that require programmatic access to APT.

Trigger DAG Run - 403

I am following this tutorial to build a Cloud Function that triggers a DAG run, but I have run into a permission issue. When the function is triggered and tries to run the DAG, I get a permission error. It reads as follows:
Service account does not have permission to access the IAP-protected application.
I have followed the recommendation in the tutorial to have a service account with the Composer User role. What am I missing?
Note: I am calling Airflow version 2's Stable REST API and my Composer is version 1.
-Diana
I found a possible duplicate question here:
Receiving HTTP 401 when accessing Cloud Composer's Airflow Rest API
As Seng Cheong noted in their answer, the reason for this error is that Google Cloud seems to have issues adding service account IDs longer than 64 characters to the Airflow list of users. After changing my service account ID to one of 64 characters or fewer, I was able to trigger the DAG successfully. If you can't make your service account ID shorter, Google's documentation suggests adding the "numeric user id" corresponding to your service account directly. The steps for doing so can be found here: https://cloud.google.com/composer/docs/access-airflow-api#access_airflow_rest_api_using_a_service_account
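For reference, here is a hedged sketch of what the Cloud Function ends up doing once the permissions are sorted: mint an OpenID Connect token for the IAP client and POST to Airflow 2's stable REST API. The client ID, webserver URL, and DAG id are placeholders you must fill in:

# Trigger a DAG run through the IAP-protected Airflow 2 stable REST API.
import requests
import google.auth.transport.requests
from google.oauth2 import id_token

IAP_CLIENT_ID = "YOUR-IAP-CLIENT-ID.apps.googleusercontent.com"  # placeholder
WEBSERVER_URL = "https://YOUR-AIRFLOW-WEBSERVER-URL"             # placeholder
DAG_ID = "my_dag"                                                # placeholder

# On a Cloud Function, this mints the token for the runtime service account.
token = id_token.fetch_id_token(
    google.auth.transport.requests.Request(), IAP_CLIENT_ID
)

resp = requests.post(
    f"{WEBSERVER_URL}/api/v1/dags/{DAG_ID}/dagRuns",
    headers={"Authorization": f"Bearer {token}"},
    json={"conf": {}},  # optional run configuration
    timeout=90,
)
resp.raise_for_status()
print(resp.json())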
Best of luck friend

SCOPES_WARNING in BigQuery when accessed from a Cloud Compute instance

Every time I use bq on a Cloud Compute instance, I get this:
/usr/local/share/google/google-cloud-sdk/platform/bq/third_party/oauth2client/contrib/gce.py:73: UserWarning: You have requested explicit scopes to be used with a GCE service account.
Using this argument will have no effect on the actual scopes for tokens
requested. These scopes are set at VM instance creation time and
can't be overridden in the request.
warnings.warn(_SCOPES_WARNING)
This is a default f1-micro instance with Debian 8. I gave this instance access to all Cloud APIs, and its service account is also an owner of the project. I ran gcloud init, but the warning persists.
Is there something wrong?
I noticed that this warning did not appear on an older instance running SDK version 0.9.85; however, I now get it when creating a new instance or upgrading to the latest gcloud SDK.
The scopes warning can be safely ignored, as it's just telling you that the only scopes that will be used are the ones specified at instance creation time, which is the expected behavior of the default GCE service account.
It seems the 'bq' tool doesn't distinguish between the default service account on GCE and a regular service account, and it always tries to set the scopes explicitly. The warning comes from oauth2client; it looks like it didn't display this warning in versions prior to v2.0.0.
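As a quick sanity check, you can ask the GCE metadata server which scopes the instance actually has; a minimal sketch in Python:

# List the scopes baked into this instance's default service account.
import requests

METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/scopes"
)

resp = requests.get(METADATA_URL, headers={"Metadata-Flavor": "Google"})
resp.raise_for_status()
print(resp.text)  # one scope URL per line, fixed at instance creation time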
I've created a public issue to track this, which you can star to get updates:
https://code.google.com/p/google-bigquery/issues/detail?id=557

How to find whether a chef-client run was successful or not

I want to perform Chef operations using the API from my own (Java) program, and I would like to know whether my chef-client run was successful or not.
What is the best way to find this out? Does Chef maintain any attribute or store the status of the most recent chef-client run?
You can get the timestamp of the last successful run from the node data (key is ohai_time), but that's about it for vanilla Chef. More likely what you want is the information for specific runs, which you could get from the Reporting system (part of the Premium add-ons) or by making a custom report/error handler to ship the data to your own system.
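If all you need is that timestamp, a minimal sketch using the third-party PyChef library (pip install pychef) looks like this; the node name is a placeholder, and since the question mentioned Java, treat this Python version purely as an illustration of the node-data lookup:

# Read ohai_time from the node object stored on the Chef server.
from datetime import datetime
from chef import autoconfigure, Node

api = autoconfigure()            # reads your knife.rb / config.rb credentials
node = Node("web1.example.com")  # placeholder node name
last_run = node["ohai_time"]     # epoch seconds saved by the last converge
print("Last successful run:", datetime.fromtimestamp(last_run))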

Twitter APIs - Twitter4j - sync issue?

I am using Twitter4J to retrieve user timelines, but it stopped working. The number of remaining requests is fine, but I get an authentication problem, probably related to clock sync?
INFO: Error while querying Twitter: 401:Authentication credentials (https://dev.twitter.com/pages/auth) were missing or incorrect. Ensure that you have set valid consumer key/secret, access token/secret, and the system clock is in sync.
{"request":"/1.1/statuses/user_timeline.json","error":"Not authorized."}
401:Authentication credentials (https://dev.twitter.com/pages/auth) were missing or incorrect. Ensure that you have set valid consumer key/secret, access token/secret, and the system clock is in sync.
{"request":"/1.1/statuses/user_timeline.json","error":"Not authorized."}
rateLimitStatus=RateLimitStatusJSONImpl{remaining=178, limit=180, resetTimeInSeconds=1432305852, secondsUntilReset=899}, version=3.0.5}
Not sure what to do then. I've already tried to sync my server with ntpdate ntp.ubuntu.com, with no luck.
I think you are using the sandbox (built-in VM) from Cloudera/Hortonworks, etc.
I was getting the same problem and tried to sync my clock with 'time.windows.com', but failed to do so. So I moved to an existing 4-node cluster where the clock was already in sync, and there I could run my requests to Twitter successfully.
Conclusion: Move from the Cloudera/Hortonworks VM to your own installed OS and make sure the clock is in sync.
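If you want to verify the drift before rebuilding anything, here is a minimal sketch using the third-party ntplib package (pip install ntplib):

# Measure how far the local clock is from an NTP server; a large offset
# breaks OAuth request signing, which is what the 401 above points at.
import ntplib

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)
print(f"Clock offset vs NTP: {response.offset:.2f} seconds")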
Hope this helps!