who is the owner of the contracts deployed using truffle? - solidity

I am using testrpc and truffle to test my contract.
When I type truffle migrate, this deploys my contract to the testrpc network.
My question is: which account (from the testrpc accounts) was used to deploy the contract?
In other words, who is the contract owner?
Thank you in advance.

By default the owner is accounts[0], i.e. the first account on the list, but you can set the owner by adding "from" to the network settings in the truffle.js config file:
module.exports = {
  networks: {
    development: {
      host: "localhost",
      port: 8545,
      network_id: "*",
      from: "0xda9b1a939350dc7198165ff84c43ce77a723ef73"
    }
  }
};
For more information about migrations, see this issue:
https://github.com/trufflesuite/truffle/issues/138
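The default-resolution rule can be sketched in plain JavaScript (an illustrative helper, not part of Truffle's API — it just mirrors the documented behavior):

```javascript
// Hypothetical sketch of how the deploying account is chosen; this mirrors
// Truffle's documented behavior but is not Truffle code itself.
function resolveDeployer(networkConfig, accounts) {
  // An explicit "from" in the network config wins...
  if (networkConfig.from) {
    return networkConfig.from;
  }
  // ...otherwise the first account from the provider (accounts[0]) is used.
  return accounts[0];
}

// Example with testrpc-style accounts:
const accounts = [
  '0x627306090abab3a6e1400e9345bc60c78a8bef57',
  '0xf17f52151ebef6c7334fad080c5704d77216b732',
];
console.log(resolveDeployer({}, accounts));                    // accounts[0]
console.log(resolveDeployer({ from: accounts[1] }, accounts)); // the explicit address
```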

Related

How to Use Azure Managed Identity as Credential For Debugging Cosmos Database access inside of Blazer Server App?

Background:
As described in bullet #4 of my other post, I'm trying to follow CDennig's example bicep script to grant my blazor (server) application access to both my key vault (to get the AAD client secret) and the azure cosmos db. CDennig's bicep script grants a managed identity (MI) access (RBAC) to the cosmos database. Unlike CDennig's use of a kubernetes cluster, I'm using an App Service Webapp and assigning the MI as the user-assigned principal of the webapp.
When I deploy to the Azure App Service as a web app, the deployment fails with no error messages. I strongly suspect the problem is that the managed identity (MI) is being denied access to my key vault, in spite of my having granted it access in my bicep script.
Questions:
How can I run my blazor web app locally on my development machine using the MI as the credential, to simulate the environment of running inside the App Service webapp and confirm my hypothesis that key vault access is the problem?
If it is the problem, how would I fix it? See my bicep script below where I grant the MI access to the key vault.
2022 April 17 Sun Morning Update:
The problem has been partially fixed by granting both the system-assigned and user-assigned MIs for the App Service Web App access to the key vault, and I can now log in when running on Azure. It seems that Azure is ignoring my user-assigned MI for key vault access, and I suspect it is also ignoring the RBAC for the user-assigned MI. When running locally on my development machine I can also write to the cosmos database, because of the RBAC applied to my personal Azure account.
I would still like to know how to run under the user-assigned MI when running locally on my development machine. Is this possible with one of the DefaultAzureCredential classes?
And of course, I would still like to know how to access the cosmos database via RBAC (no passwords) when deployed to Azure.
2022 April 17 Sun Evening Update: Progress!
When I grant the RBAC access to the system-assigned service principal for the Azure App Service Web App using powershell, it works and I can access the cosmos database from the azure app service. Please help me fix my bicep script. When I try to fix the script by assigning RBAC to the user-assigned service principal, I get the error described here.
It seems that this could be done with DefaultAzureCredential, but I'm not clear on how to do this.
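A sketch of what I think the bicep equivalent of the PowerShell workaround would look like: enable the system-assigned identity on the site alongside the user-assigned one, then grant the Cosmos role to its principalId via the same module. This is unverified; the module invocation name and the it value are arbitrary labels I made up for the sketch.

```bicep
resource web 'Microsoft.Web/sites@2020-12-01' = {
  name: '${name}web'
  location: location
  identity: {
    // Both identities on the site; the system-assigned one is the principal
    // that the working PowerShell workaround granted the role to.
    type: 'SystemAssigned, UserAssigned'
    userAssignedIdentities: {
      '${msi.id}': {}
    }
  }
  properties: {
    httpsOnly: true
    serverFarmId: plan.id
  }
}

// Grant the Cosmos role to the system-assigned principal as well.
// 'cosmos-role-web-system-mi' and it: 99 are placeholder values.
module cosmosRoleForWebApp 'cosmosRole.bicep' = {
  name: 'cosmos-role-web-system-mi'
  params: {
    cosmosDbAccountId: cosmosDbAccount.id
    cosmosDbAccountName: cosmosDbAccount.name
    principalId: web.identity.principalId
    it: 99
  }
}
```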
Here is my bicep script:
/*
* Begin commands to execute this file using Azure CLI with PowerShell
* $name='AADB2C_BlazorServerDemo'
* $rg="rg_$name"
* $loc='westus2'
* echo az.cmd group create --location $loc --resource-group $rg
* az.cmd group create --location $loc --resource-group $rg
* echo Set-AzDefault -ResourceGroupName $rg
* Set-AzDefault -ResourceGroupName $rg
* echo begin create deployment group
* az.cmd identity create --name umid-cosmosid --resource-group $rg --location $loc
* $MI_PRINID=$(az identity show -n umid-cosmosid -g $rg --query "principalId" -o tsv)
* write-output "principalId=${MI_PRINID}"
* az.cmd deployment group create --name $name --resource-group $rg --template-file deploy.bicep --parameters '@deploy.parameters.json' --parameters managedIdentityName=umid-cosmosid ownerId=$env:AZURE_OBJECTID --parameters principalId=$MI_PRINID
* Get-AzResource -ResourceGroupName $rg | ft
* echo end create deployment group
* End commands to execute this file using Azure CLI with Powershell
*
*/
@description('AAD Object ID of the developer so s/he can access key vault when running on development')
param ownerId string

@description('Principal ID of the managed identity')
param principalId string

var principals = [
  principalId
  ownerId
]

@description('The base name for resources')
param name string = uniqueString(resourceGroup().id)

@description('The location for resources')
param location string = resourceGroup().location

@description('Cosmos DB Configuration [{key:"", value:""}]')
param cosmosConfig object

@description('Azure AD B2C Configuration [{key:"", value:""}]')
param aadb2cConfig object

@description('Azure AD B2C App Registration client secret')
@secure()
param clientSecret string

@description('Dummy Azure AD B2C App CosmosConnectionString')
@secure()
param cosmosConnectionString string = newGuid()

@description('The web site hosting plan')
@allowed([
  'F1'
  'D1'
  'B1'
  'B2'
  'B3'
  'S1'
  'S2'
  'S3'
  'P1'
  'P2'
  'P3'
  'P4'
])
param sku string = 'F1'

@description('The App Configuration SKU. Only "standard" supports customer-managed keys from Key Vault')
@allowed([
  'free'
  'standard'
])
param configSku string = 'free'
resource config 'Microsoft.AppConfiguration/configurationStores@2020-06-01' = {
  name: 'asc-${name}config'
  location: location
  sku: {
    name: configSku
  }

  resource Aadb2cConfigValues 'keyValues@2020-07-01-preview' = [for item in items(aadb2cConfig): {
    name: 'AzureAdB2C:${item.key}'
    properties: {
      value: item.value
    }
  }]

  resource CosmosConfigValues 'keyValues@2020-07-01-preview' = [for item in items(cosmosConfig): {
    name: 'CosmosConfig:${item.key}'
    properties: {
      value: item.value
    }
  }]

  resource aadb2cClientSecret 'keyValues@2020-07-01-preview' = {
    // Store secrets in Key Vault with a reference to them in App Configuration e.g., client secrets, connection strings, etc.
    name: 'AzureAdB2C:ClientSecret'
    properties: {
      // Most often you will want to reference a secret without the version so the current value is always retrieved.
      contentType: 'application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8'
      value: '{"uri":"${kvaadb2cSecret.properties.secretUri}"}'
    }
  }

  resource cosmosConnectionStringSecret 'keyValues@2020-07-01-preview' = {
    // Store secrets in Key Vault with a reference to them in App Configuration e.g., client secrets, connection strings, etc.
    name: 'CosmosConnectionStringSecret'
    properties: {
      // Most often you will want to reference a secret without the version so the current value is always retrieved.
      contentType: 'application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8'
      value: '{"uri":"${kvCosmosConnectionStringSecret.properties.secretUri}"}'
    }
  }
}
resource kv 'Microsoft.KeyVault/vaults@2019-09-01' = {
  // Make sure the Key Vault name begins with a letter.
  name: 'kv-${name}'
  location: location
  properties: {
    sku: {
      family: 'A'
      name: 'standard'
    }
    tenantId: subscription().tenantId
    accessPolicies: [
      {
        tenantId: subscription().tenantId
        objectId: ownerId
        permissions: {
          secrets: [
            'all'
          ]
        }
      }
      {
        tenantId: subscription().tenantId
        objectId: principalId
        permissions: {
          // Secrets are referenced by and enumerated in App Configuration so 'list' is not necessary.
          secrets: [
            'get'
          ]
        }
      }
    ]
  }
}
// Separate resource from parent to reference in configSecret resource.
resource kvaadb2cSecret 'Microsoft.KeyVault/vaults/secrets@2019-09-01' = {
  name: '${kv.name}/AzureAdB2CClientSecret'
  properties: {
    value: clientSecret
  }
}

resource kvCosmosConnectionStringSecret 'Microsoft.KeyVault/vaults/secrets@2019-09-01' = {
  name: '${kv.name}/CosmosConnectionStringSecret'
  properties: {
    value: cosmosConnectionString
  }
}

resource plan 'Microsoft.Web/serverfarms@2020-12-01' = {
  name: '${name}plan'
  location: location
  sku: {
    name: sku
  }
  kind: 'linux'
  properties: {
    reserved: true
  }
}
@description('Specifies managed identity name')
param managedIdentityName string

resource msi 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' existing = {
  name: managedIdentityName
}
// https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.web/web-app-managed-identity-sql-db/main.bicep#L73
resource web 'Microsoft.Web/sites@2020-12-01' = {
  name: '${name}web'
  location: location
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${msi.id}': {}
    }
  }
  properties: {
    httpsOnly: true
    serverFarmId: plan.id
    siteConfig: {
      linuxFxVersion: 'DOTNETCORE|6'
      connectionStrings: [
        {
          name: 'AppConfig'
          connectionString: listKeys(config.id, config.apiVersion).value[0].connectionString
        }
      ]
    }
  }
}
output appConfigConnectionString string = listKeys(config.id, config.apiVersion).value[0].connectionString
// output siteUrl string = 'https://${web.properties.defaultHostName}/'
output vaultUrl string = kv.properties.vaultUri
var dbName = 'rbacsample'
var containerName = 'data'
// Cosmos DB Account
resource cosmosDbAccount 'Microsoft.DocumentDB/databaseAccounts@2021-06-15' = {
  name: 'cosmos-${uniqueString(resourceGroup().id)}'
  location: location
  kind: 'GlobalDocumentDB'
  properties: {
    consistencyPolicy: {
      defaultConsistencyLevel: 'Session'
    }
    locations: [
      {
        locationName: location
        failoverPriority: 0
      }
    ]
    capabilities: [
      {
        name: 'EnableServerless'
      }
    ]
    disableLocalAuth: false // switch to 'true', if you want to disable connection strings/keys
    databaseAccountOfferType: 'Standard'
    enableAutomaticFailover: true
    publicNetworkAccess: 'Enabled'
  }
}
// Cosmos DB
resource cosmosDbDatabase 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases@2021-06-15' = {
  name: '${cosmosDbAccount.name}/${dbName}'
  location: location
  properties: {
    resource: {
      id: dbName
    }
  }
}

// Data Container
resource containerData 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers@2021-06-15' = {
  name: '${cosmosDbDatabase.name}/${containerName}'
  location: location
  properties: {
    resource: {
      id: containerName
      partitionKey: {
        paths: [
          '/partitionKey'
        ]
        kind: 'Hash'
      }
    }
  }
}
@batchSize(1)
module cosmosRole 'cosmosRole.bicep' = [for (princId, jj) in principals: {
  name: 'cosmos-role-definition-and-assignment-${jj}'
  params: {
    cosmosDbAccountId: cosmosDbAccount.id
    cosmosDbAccountName: cosmosDbAccount.name
    principalId: princId
    it: jj
  }
}]
Here is the module to create the role assignments and definitions:
@description('cosmosDbAccountId')
param cosmosDbAccountId string

@description('cosmosDbAccountName')
param cosmosDbAccountName string

@description('iteration')
param it int

@description('Principal ID of the managed identity')
param principalId string

var roleDefId = guid('sql-role-definition-', principalId, cosmosDbAccountId)
var roleDefName = 'Custom Read/Write role-${it}'
var roleAssignId = guid(roleDefId, principalId, cosmosDbAccountId)

resource roleDefinition 'Microsoft.DocumentDB/databaseAccounts/sqlRoleDefinitions@2021-06-15' = {
  name: '${cosmosDbAccountName}/${roleDefId}'
  properties: {
    roleName: roleDefName
    type: 'CustomRole'
    assignableScopes: [
      cosmosDbAccountId
    ]
    permissions: [
      {
        dataActions: [
          'Microsoft.DocumentDB/databaseAccounts/readMetadata'
          'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/*'
        ]
      }
    ]
  }
}

resource roleAssignment 'Microsoft.DocumentDB/databaseAccounts/sqlRoleAssignments@2021-06-15' = {
  name: '${cosmosDbAccountName}/${roleAssignId}'
  properties: {
    roleDefinitionId: roleDefinition.id
    principalId: principalId
    scope: cosmosDbAccountId
  }
}

"Execution failed" when setting up API Gateway and Fargate with AWS CDK

I am trying to set up AWS API Gateway to access a fargate container in a private VPC, as described here. For this I am using AWS CDK, as described below. But when I curl the endpoint after a successful cdk deploy, I get "Internal Server Error" as a response. I can't find any additional information; for some reason API GW can't reach the container.
So when I curl the endpoint like this:
curl -i https://xxx.execute-api.eu-central-1.amazonaws.com/prod/MyResource
... I get the following log output in cloud watch:
Extended Request Id: NpuEPFWHliAFm_w=
Verifying Usage Plan for request: 757c6b9e-c4af-4dab-a5b1-542b15a1ba21. API Key: API Stage: ...
API Key authorized because method 'ANY /MyResource/{proxy+}' does not require API Key. Request will not contribute to throttle or quota limits
Usage Plan check succeeded for API Key and API Stage ...
Starting execution for request: 757c6b9e-c4af-4dab-a5b1-542b15a1ba21
HTTP Method: GET, Resource Path: /MyResource/test
Execution failed due to configuration error: There was an internal error while executing your request
CDK Code
First I create a network load balanced fargate service:
private setupService(): NetworkLoadBalancedFargateService {
  const vpc = new Vpc(this, 'MyVpc');
  const cluster = new Cluster(this, 'MyCluster', {
    vpc: vpc,
  });
  cluster.connections.allowFromAnyIpv4(Port.tcp(5050));

  const taskDefinition = new FargateTaskDefinition(this, 'MyTaskDefinition');
  const container = taskDefinition.addContainer('MyContainer', {
    image: ContainerImage.fromRegistry('vad1mo/hello-world-rest'),
  });
  container.addPortMappings({
    containerPort: 5050,
    hostPort: 5050,
  });

  const service = new NetworkLoadBalancedFargateService(this, 'MyFargateServie', {
    cluster,
    taskDefinition,
    assignPublicIp: true,
  });
  service.service.connections.allowFromAnyIpv4(Port.tcp(5050));
  return service;
}
Next I create the VpcLink and the API Gateway:
private setupApiGw(service: NetworkLoadBalancedFargateService) {
  const api = new RestApi(this, `MyApi`, {
    restApiName: `MyApi`,
    deployOptions: {
      loggingLevel: MethodLoggingLevel.INFO,
    },
  });
  // setup api resource which forwards to container
  const resource = api.root.addResource('MyResource');
  resource.addProxy({
    anyMethod: true,
    defaultIntegration: new HttpIntegration('http://localhost.com:5050', {
      httpMethod: 'ANY',
      options: {
        connectionType: ConnectionType.VPC_LINK,
        vpcLink: new VpcLink(this, 'MyVpcLink', {
          targets: [service.loadBalancer],
          vpcLinkName: 'MyVpcLink',
        }),
      },
      proxy: true,
    }),
    defaultMethodOptions: {
      authorizationType: AuthorizationType.NONE,
    },
  });
  resource.addMethod('ANY');
  this.addCorsOptions(resource);
}
Does anyone have a clue what is wrong with this config?
After hours of trying, I finally figured out that the security groups do not seem to be updated correctly when setting up the VpcLink with CDK. Broadening the allowed connections with
service.service.connections.allowFromAnyIpv4(Port.allTraffic())
solved it. I still need to figure out the minimum rule set to use instead of allTraffic().
Additionally, I replaced localhost in the HttpIntegration with the DNS name of the load balancer, like this:
resource.addMethod('ANY', new HttpIntegration(
  'http://' + service.loadBalancer.loadBalancerDnsName,
  {
    httpMethod: 'ANY',
    options: {
      connectionType: ConnectionType.VPC_LINK,
      vpcLink: new VpcLink(this, 'MyVpcLink', {
        targets: [service.loadBalancer],
        vpcLinkName: 'MyVpcLink',
      }),
    },
  }
))

Best way to configure truffle `from` address

I'm setting up my truffle config file and I'm setting the from address from an env variable like this:
module.exports = {
  networks: {
    local: {
      host: "127.0.0.1",
      port: 8545,
      network_id: "*",
      from: process.env.OWNER,
    }
  }
};
Then I run OWNER=<address> truffle migrate --network local
Any suggestions on a better way to do this, to get truffle to use the first address generated by ganache?
If you omit the from parameter in your truffle config, it will automatically default to the first account returned by web3.eth.getAccounts from the provider you're connected to.
If you want more dynamic control over the account used, you can control this with the deployer.
var SimpleContract = artifacts.require("SimpleContract");

module.exports = function(deployer, network, accounts) {
  deployer.deploy(SimpleContract, { from: accounts[1] }); // Deploy contract from the 2nd account in the list
  deployer.deploy(SimpleContract, { from: accounts[2] }); // Deploy the same contract again (different address) from the 3rd account.
};
Of course, you don't have to use the account list passed in and you can pull in a list from any other data source you want. You can also use network if you want to have environment specific logic.
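For instance, the network argument can drive which account deploys. The network names and account layout in this sketch are assumptions for illustration, not part of any real project:

```javascript
// Illustrative helper: pick the deploying account based on the network name.
// 'live' and the account positions here are assumed, not prescribed by Truffle.
function chooseDeployerAccount(network, accounts) {
  if (network === 'live') {
    // On the live network, deploy from a dedicated, well-funded account.
    return accounts[0];
  }
  // On development networks, use a throwaway secondary account.
  return accounts[1];
}

// In a migration this could be used as:
// module.exports = function(deployer, network, accounts) {
//   deployer.deploy(SimpleContract, { from: chooseDeployerAccount(network, accounts) });
// };

const accounts = ['0xmainAccount', '0xdevAccount'];
console.log(chooseDeployerAccount('live', accounts));        // first account
console.log(chooseDeployerAccount('development', accounts)); // second account
```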

Unable to connect Ganache with Truffle/Npm Dev server

I am able to work with Truffle and Ganache-cli. I have deployed the contract and can play with it using the truffle console:
truffle(development)> Voting.deployed().then(function(contractInstance) {
  contractInstance.voteForCandidate('Rama').then(function(v) { console.log(v) })
})
undefined
truffle(development)> { tx: '0xe4f8d00f7732c09df9e832bba0be9f37c3e2f594d3fbb8aba93fcb7faa0f441d',
  receipt:
   { transactionHash: '0xe4f8d00f7732c09df9e832bba0be9f37c3e2f594d3fbb8aba93fcb7faa0f441d',
     transactionIndex: 0,
     blockHash: '0x639482c03dba071973c162668903ab98fb6ba4dbd8878e15ec7539b83f0e888f',
     blockNumber: 10,
     gasUsed: 28387,
     cumulativeGasUsed: 28387,
     contractAddress: null,
     logs: [],
     status: '0x01',
     logsBloom: ... }
Now when I start the server using "npm run dev", the server starts fine but does not connect to the blockchain.
I am getting the error:
Uncaught (in promise) Error: Contract has not been deployed to detected network (network/artifact mismatch)
This is my truffle.js
// Allows us to use ES6 in our migrations and tests.
require('babel-register')

module.exports = {
  networks: {
    development: {
      host: '127.0.0.1',
      port: 8545,
      network_id: '*', // Match any network id
      gas: 1470000
    }
  }
}
Can you please guide me on how I can connect?
Solved the issue.
The issue was with the currentProvider: I gave it the URL of the ganache blockchain provider and it worked.
if (typeof web3 !== 'undefined') {
  console.warn("Using web3 detected from external source like Metamask")
  // Use Mist/MetaMask's provider
  // window.web3 = new Web3(web3.currentProvider);
  window.web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:7545"));
} else {
  console.warn("No web3 detected. Falling back to http://localhost:8545. You should remove this fallback when you deploy live, as it's inherently insecure. Consider switching to Metamask for development. More info here: http://truffleframework.com/tutorials/truffle-and-metamask");
  // fallback - use your fallback strategy (local node / hosted node + in-dapp id mgmt / fail)
  window.web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"));
}
In your truffle.js, change 8545 to 7545.
Or, in Ganache (GUI), click the gear in the upper right corner and change the port number from 7545 to 8545, then restart. With ganache-cli, use the -p 8545 option on startup to set 8545 as the port to listen on.
Either way, the mismatch seems to be the issue; these numbers should match. This is a common issue.
Also feel free to check out ethereum.stackexchange.com. If you want your question moved there, you can flag it and leave a message for a moderator to do that.

Spring Cloud Config (Vault backend) terminating too early

I am using Spring Cloud Config Server to serve configuration for my client apps. To facilitate secrets configuration I am using HashiCorp Vault as a back end; for the remainder of the configuration I am using a Git repo, so I have configured the config server in composite mode. See my config server bootstrap.yml below:
server:
  port: 8888
spring:
  profiles:
    active: local, git, vault
  application:
    name: my-domain-configuration-server
  cloud:
    config:
      server:
        git:
          uri: https://mygit/my-domain-configuration
          order: 1
        vault:
          order: 2
          host: vault.mydomain.com
          port: 8200
          scheme: https
          backend: mymount/generic
This is all working as expected. However, the token I am using is secured with a Vault auth policy. See below:-
path "mymount/generic/myapp-app,local" {
  policy = "read"
}
path "mymount/generic/myapp-app,local/*" {
  policy = "read"
}
path "mymount/generic/myapp-app" {
  policy = "read"
}
path "mymount/generic/myapp-app/*" {
  policy = "read"
}
path "mymount/generic/application,local" {
  policy = "read"
}
path "mymount/generic/application,local/*" {
  policy = "read"
}
path "mymount/generic/application" {
  policy = "read"
}
path "mymount/generic/application/*" {
  policy = "read"
}
My issue is that I am not storing secrets in all these scopes. I need to specify all these paths just so I can authorize the token to read one secret from mymount/generic/myapp-app,local. If I do not authorize all the other paths, the VaultEnvironmentRepository.read() method returns a 403 HTTP status code (Forbidden) and throws a VaultException. This results in a complete failure to retrieve any configuration for the app, including the Git-based configuration. This is very limiting, as client apps may have multiple Spring profiles that have nothing to do with retrieving configuration items; the config server will attempt to retrieve configuration for all the active profiles provided by the client.
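Roughly, for each active profile the config server asks Vault for an application,profile key plus the shared application entries, which is where the long path list above comes from. A sketch of that key expansion (my own model of the behavior, not Spring code; each resulting key also needs a /* companion in the policy, giving the eight rules above):

```javascript
// Rough model of the Vault keys Spring Cloud Config requests for a client:
// one per application/profile combination, plus the shared "application" keys.
function vaultKeysFor(backend, application, profiles) {
  const keys = [];
  for (const profile of profiles) {
    keys.push(`${backend}/${application},${profile}`);
    keys.push(`${backend}/application,${profile}`);
  }
  keys.push(`${backend}/${application}`);
  keys.push(`${backend}/application`);
  return keys;
}

// Every one of these needs read access in the token's policy:
console.log(vaultKeysFor('mymount/generic', 'myapp-app', ['local']));
```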
Is there a way to enable fault tolerance or lenience on the config server, so that VaultEnvironmentRepository does not abort and returns any configuration that it is actually authorized to return?
Do you absolutely need the local profile? Would you not be able to get by with just the 'vault' and 'git' profiles in Config Server and use the 'default' profile in each Spring Boot application?
If you use the above suggestion then the only two paths you'd need in your rules (.hcl) file are:
path "mymount/generic/application" {
  capabilities = ["read", "list"]
}
and
path "mymount/generic/myapp-app" {
  capabilities = ["read", "list"]
}
This assumes that you're writing configuration to
vault write mymount/generic/myapp-app
and not
vault write mymount/generic/myapp-app,local
or similar.