I have two smart contracts that I want to deploy. I want to deploy the first one, then pass the address of the first into the constructor of the second one. I am new to hardhat-deploy and keep getting caught up with this.
Thanks!
First, create the file scripts/deploy.js:
const { ethers } = require("hardhat");

async function main() {
  const [deployer] = await ethers.getSigners();
  console.log("Deploying contracts with the account: " + deployer.address);

  // Deploy the first contract and wait for it to be mined
  const First = await ethers.getContractFactory("FirstContract");
  const first = await First.deploy();
  await first.deployed();

  // Deploy the second contract, passing the first contract's address to its constructor
  const Second = await ethers.getContractFactory("SecondContract");
  const second = await Second.deploy(first.address);
  await second.deployed();

  console.log("First: " + first.address);
  console.log("Second: " + second.address);
}

main()
  .then(() => process.exit(0))
  .catch((error) => {
    console.error(error);
    process.exit(1);
  });
Then run this command.
npx hardhat run scripts/deploy.js --network ropsten
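Note that the --network flag only works if the target network is defined in hardhat.config.js. A minimal sketch of such a config (the RPC URL and private-key environment variable names here are placeholders, not part of the original question):

require("@nomiclabs/hardhat-ethers");

module.exports = {
  solidity: "0.8.4",
  networks: {
    ropsten: {
      url: process.env.ROPSTEN_RPC_URL,     // placeholder: your node/RPC endpoint
      accounts: [process.env.PRIVATE_KEY],  // placeholder: the deployer's private key
    },
  },
};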
The original question specifically refers to the hardhat-deploy NPM package (i.e. the community plugin for Hardhat tooling). On that basis, the answer provided above is not directly correct.
hardhat-deploy's documentation is extensive and thorough. The portion relevant to deploying (multiple) contracts is here.
The hardhat-deploy documentation may be a little cryptic for newcomers, but it is actually very simple to deploy multiple contracts with hardhat-deploy.
First, create the deployment scripts in the deploy directory, which sits at the same level as the contracts directory.
You can name the deployment scripts 01-deploy-contract-1.js, 02-deploy-contract-2.js, etc.
A sample deploy script is shown below.
01-deploy-contract-1.js
const { ethers, network } = require("hardhat");
require("dotenv").config();

async function deployFunc(hre) {
  // Deploy only when the env flag is set (guard against it being undefined)
  if ((process.env.DEPLOY_TOKEN_LIB || "").toLowerCase() === "true") {
    console.log("Deploying Contract - 1...");
    await deployContract1();
    console.log("-------------------------");
  }
}

async function deployContract1() {
  const networkName = network.name;
  const [owner] = await ethers.getSigners();
  console.log(`Deploying to '${networkName}' as ${owner.address}...`);

  const Contract1 = await ethers.getContractFactory("Contract1");
  const lib = await Contract1.deploy(owner.address);
  await lib.deployed();

  console.log(`Token library contract deployed to ${lib.address}`);
}

module.exports.default = deployFunc;
module.exports.tags = ["all", "tokenlibrary"];
Now run npx hardhat deploy. It will run all the deployment scripts in the deploy folder.
An added advantage of hardhat-deploy scripts is that when you run npx hardhat node, all the contracts are deployed automatically, so your local node comes up with everything ready to test.
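To wire the two contracts together the way the original question asks, a second script can read the first contract's recorded address through hardhat-deploy's deployments API. Below is a minimal sketch; it assumes the first script registered its deployment under the name "Contract1" via deployments.deploy (rather than a raw ethers factory, which hardhat-deploy does not record) and that a deployer named account is configured in hardhat.config.js.

02-deploy-contract-2.js

module.exports = async ({ getNamedAccounts, deployments }) => {
  const { deploy, get } = deployments;
  const { deployer } = await getNamedAccounts();

  // Look up the deployment saved by the earlier script
  const first = await get("Contract1");

  // Pass the first contract's address into the second contract's constructor
  await deploy("Contract2", {
    from: deployer,
    args: [first.address],
    log: true,
  });
};

module.exports.tags = ["all", "contract2"];

Because the scripts run in filename order, 01-... executes before 02-..., so the lookup succeeds.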
I am making a cron job instance that runs under Node to remove posts from my Redis cache.
I want to promisify client.zrem for removing many posts from the cache, to ensure they are all removed, but when running my code I get the error below on the line client.zrem = util.promisify(client.zrem):
"TypeError [ERR_INVALID_ARG_TYPE]: The "original" argument must be of type function. Received undefined"
I have another Node instance that runs this same code with no errors, and I have updated my NPM version to the latest one, following a similar question for this SO article, but I am still getting the error:
TypeError [ERR_INVALID_ARG_TYPE]: The "original" argument must be of type Function. Received type undefined
Any idea how I can fix this?
const Redis = require("redis")
const util = require(`util`)
const client = Redis.createClient({
url: process.env.REDIS,
})
client.zrem = util.promisify(client.zrem) // ERROR THROWN HERE
// DELETE ONE POST
const deletePost = async (deletedPost) => {
await client.zrem("posts", JSON.stringify(deletedPost))
}
// DELETES MANY POSTS
const deleteManyPosts = (postsToDelete) => {
postsToDelete.map(async (post) => {
await client.zrem("posts", JSON.stringify(post))
})
}
module.exports = { deletePost, deleteManyPosts }
Node Redis 4.x introduced several breaking changes. Adding support for Promises was one of them; renaming the methods to camel case was another. Details can be found in the README in the GitHub repo for Node Redis.
You simply need to delete the offending line and rename the calls from .zrem to .zRem.
I've also noticed that you aren't explicitly connecting to Redis after creating the client. You'll want to do that.
Try this:
const Redis = require("redis")
const client = Redis.createClient({
url: process.env.REDIS,
})
// CONNECT TO REDIS
// NOTE: this code assumes that the Node.js version supports top-level await
client.on('error', (err) => console.log('Redis Client Error', err));
await client.connect(); //
// DELETE ONE POST
const deletePost = async (deletedPost) => {
await client.zRem("posts", JSON.stringify(deletedPost))
}
// DELETES MANY POSTS
const deleteManyPosts = (postsToDelete) => {
postsToDelete.map(async (post) => {
await client.zRem("posts", JSON.stringify(post))
})
}
module.exports = { deletePost, deleteManyPosts }
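One caveat with deleteManyPosts as written: map fires off all the zRem calls, but the resulting promises are never awaited, so the function returns before the removals finish and any failure goes unnoticed. A small variation that waits for every removal (a sketch, using the same client as above):

// DELETES MANY POSTS, resolving only once every removal has completed
const deleteManyPosts = async (postsToDelete) => {
  await Promise.all(
    postsToDelete.map((post) => client.zRem("posts", JSON.stringify(post)))
  )
}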
I'm testing my express.js API by importing the root app instance and using it through chai-http. The tests are run via mocha.
As the title says, when I run my tests via GitHub Actions the results differ from when they are run locally. On GitHub Actions I get an automatic 503 error from express on any of the test requests, or any type of request, while locally all tests run fine and pass. When the 503 is received, the entire test runner hangs as well and doesn't exit until the MongoDB driver times out with an error.
Initially the tests timed out, so I added --timeout 15000 to bypass that, hopefully temporarily.
This is the general setup and imports done before the tests:
import {server} from "../bld/server.js";
import {join} from "path";
import chai from "chai";
import chaiHttp from "chai-http";
import chaiAjv from "chai-json-schema-ajv";
chai.use(chaiHttp);
chai.use(chaiAjv);
const api = !!process.env.TEST_MANUAL
    ? (process.env.API_ADDRESS ?? "http://localhost") + ":" + (process.env.API_PORT ?? "8000")
    : server;

console.log((typeof api == "string")
    ? `Using manually hosted server: "${api}"`
    : "Using imported server instance");
const request = chai.request;
const expect = chai.expect;
And the first test looks like this:

describe("verification flow", () => {
    const actionUrl = "/" + join("action", "verify", "0");
    let response;

    describe("phase 0: getting verification code", () => {
        it("should return a verification code", async () => {
            const res = await request(api).post(actionUrl);
            expect(res).to.have.status(201);
            expect(res).to.have.json;
            expect(res.body).to.have.property("verificationCode");
            response = res.body;
        });
        // ....
    });
    // ...
});
There are no errors on imports, no errors attaching the server to chai, and from what I can log, no issues with file pathing either. At this point I'm lost as to what could be causing the issue and don't know where to look for the root of the problem.
More specific information below:
Dependencies: https://pastebin.com/tu3n9FkZ
Github Actions: https://pastebin.com/HEk5adWQ
No artifacts (dependencies, builds) have been cached according to logs
Thank you for your time, and let me know if you need any more information.
I'm using Truffle to develop a DAPP. I would like to ask if it's possible to dynamically get the network name during the deployment process when dashboard is the specified network. What I mean by that is: I have a deploy-config.js file which holds different configurations for different networks, and a 2_deploy_MyContract.js migration file. MyContract expects a struct as a parameter of its constructor function.
const MyContract = artifacts.require('MyContract');
const getConfig = require('../deploy-config');

module.exports = async function (deployer) {
  const config = getConfig(currently_selected_network); // <-- The Problem
  await deployer.deploy(
    MyContract,
    {
      ...config.data
    }
  );
};
When I run truffle migrate --reset --network dashboard, I can change the selected network in MetaMask at any time. I would like to somehow fetch the network name it deploys to and pass it as currently_selected_network so my JS function can provide the proper config values. I know I could predefine network names in truffle-config.js and deploy only to those, but using the dashboard lets me keep the mnemonic out of the repo and sign every transaction with the MetaMask extension.
If you have any other ideas how to achieve this goal, I will be more than happy to hear them!
This is how deploy-config.js looks:

const config = {
  network1: {
    paramA: "A",
    paramB: "B"
  },
  network2: {
    paramA: "C",
    paramB: "D"
  }
}

function getConfig(networkName) {
  switch (networkName) {
    case "network1":
      return config.network1;
    case "network2":
      return config.network2;
    default:
      return null;
  }
}

module.exports = getConfig
In your migration scripts, add network to the parameters of the exported function.
I believe the order and argument positions matter.
e.g.
module.exports = async (deployer, network, accounts) => {
  console.log(network);
  console.log(accounts); // also useful to have at hand
};
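Applied to the migration above, the problematic line then becomes a plain lookup on the name Truffle reports (a sketch; note that when deploying through the dashboard, Truffle may report the network name as dashboard itself rather than the chain MetaMask is connected to, so the config keys may need to account for that):

const MyContract = artifacts.require('MyContract');
const getConfig = require('../deploy-config');

module.exports = async function (deployer, network) {
  // 'network' is supplied by Truffle as the second argument
  const config = getConfig(network);
  await deployer.deploy(MyContract, { ...config.data });
};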
We are trying to track down a network issue in our company which causes a "Browser Disconnect" General Error. I want to use the RequestLogger timestamp to help us highlight when this intermittent issue occurs, along with any additional request/response information at that time.
In the RequestLogger documentation, .requestHooks(logger) is initiated at the level of every test case, and then console.log(logRecord.X.X) is used to log the record at that specific time.
But how can I have continuous logging throughout my whole test framework without putting console.log(logRecord.X.X) on every line?
Is it somehow possible to have the RequestLogger continuously running via my test-runner function?
if (nodeConfig.util.getEnv('NODE_ENV') == "jenkins-ci") {
    // @ts-ignore
    // createTestCafe("localhost", port1, port2).then(tc => {
    createTestCafe().then(tc => {
            this.testcafe = tc;
            this.runner = this.testcafe.createRunner();
            return this.runner
                .src(testPath)
                .filter(filterSettings)
                .browsers(environment.browserToLaunch)
                .concurrency(environment.concurrencyAmount)
                .reporter(reporterSettings)
                .run(runSettingsCi);
        })
        .then(failedCount => {
            console.log('Location ' + testPath + ' tests failed: ' + failedCount);
            this.testcafe.close();
            process.exit(0);
        })
        .catch((err) => {
            console.log('Location ' + testPath + ' General Error');
            console.log(err);
            this.testcafe.close();
            process.exit(1);
        });
}
TestCafe doesn't allow attaching request hooks via the test runner class. You can, however, attach one to each fixture, and the RequestLogger will collect information about all requests.
For example:
import { Selector, RequestLogger } from 'testcafe';

const logger = RequestLogger();

fixture `Log all requests`
    .page`devexpress.github.io/testcafe`
    .requestHooks(logger)
    .afterEach(() => console.log(logger.requests));

test('Test 1', async t => {
    await t
        .click(Selector('span').withText('Docs'))
        .click(Selector('a').withText('Using TestCafe'))
        .click(Selector('a').withText('Test API'));
});

test('Test 2', async t => {
    await t
        .click(Selector('span').withText('Docs'))
        .click(Selector('a').withText('Continuous Integration'))
        .click(Selector('a').withText('How It Works'));
});
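Since the original goal is a timestamp for each request, the afterEach hook above can pull that field out of the collected records instead of dumping them whole. A sketch, assuming each logged record exposes request.timestamp, request.method, and request.url as described in the RequestLogger documentation:

fixture `Log request timestamps`
    .page`devexpress.github.io/testcafe`
    .requestHooks(logger)
    .afterEach(() => {
        // print one line per request: when it was sent, its method, and its URL
        logger.requests.forEach(record => {
            console.log(`${record.request.timestamp} ${record.request.method} ${record.request.url}`);
        });
    });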
Previously, TestCafe allowed you to attach request hooks only to one test or fixture at a time. Starting with TestCafe v1.19.0, you can also define global request hooks in a JavaScript configuration file (.testcaferc.js) to attach them to all fixtures and tests within a test run. You can learn more here: Global Request Hooks.
Please note that you can use the configFile option in the CLI and the program API to specify the path to a config file.
For the initial usage scenario, you can use the following example:
const { RequestLogger } = require('testcafe');

const logger = RequestLogger();

module.exports = {
  hooks: {
    request: logger,
  },
};
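One practical wrinkle: the tests still need a reference to the logger in order to read logger.requests. A sketch of one way to handle this, using a hypothetical shared module named request-logger.js, so that the config file and the tests get the same instance via Node's module cache:

// request-logger.js (hypothetical shared module)
const { RequestLogger } = require('testcafe');

module.exports = RequestLogger();

// .testcaferc.js
const logger = require('./request-logger');

module.exports = {
  hooks: {
    request: logger,
  },
};

// any test file
const logger = require('./request-logger');

fixture `Uses the global logger`
    .page`devexpress.github.io/testcafe`
    .afterEach(() => console.log(logger.requests)); // same instance the global hook writes to

test('Test', async t => {
    await t.click('span');
});

If the config file doesn't live in the project root, its location can be passed with the configFile option mentioned above.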
I have a NestJS application with TypeORM configured for MySQL. I want to have e2e (integration) tests, and for that reason I want an in-memory database in the tests, which I configured this way:
{
  type: 'sqlite',
  database: ':memory:',
  synchronize: true,
  dropSchema: true,
  entities: [`dist/**/*.entity{.ts,.js}`],
}
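For reference, the MockTypeOrmConfigService used in the setup below might look like this (a sketch; it assumes the service implements the TypeOrmOptionsFactory interface from @nestjs/typeorm and simply returns the in-memory options above):

import { Injectable } from '@nestjs/common';
import { TypeOrmModuleOptions, TypeOrmOptionsFactory } from '@nestjs/typeorm';

@Injectable()
export class MockTypeOrmConfigService implements TypeOrmOptionsFactory {
  createTypeOrmOptions(): TypeOrmModuleOptions {
    // Serve the in-memory sqlite options instead of the real MySQL config
    return {
      type: 'sqlite',
      database: ':memory:',
      synchronize: true,
      dropSchema: true,
      entities: [`dist/**/*.entity{.ts,.js}`],
    };
  }
}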
And the setup of the tests:

beforeEach(async () => {
  const moduleFixture: TestingModule = await Test
    .createTestingModule({ imports: [AppModule, UserModule] })
    .overrideProvider(TypeOrmConfigService)
    .useClass(MockTypeOrmConfigService)
    .compile();

  app = await moduleFixture.createNestApplication();
  await app.init();
});

When running the tests I get:
AlreadyHasActiveConnectionError: Cannot create a new connection named "default", because connection with such name already exist and it now has an active connection session.
at new AlreadyHasActiveConnectionError (/Users/user/workspace/app/src/error/AlreadyHasActiveConnectionError.ts:8:9)
at ConnectionManager.Object.<anonymous>.ConnectionManager.create (/Users/user/workspace/app/src/connection/ConnectionManager.ts:57:23)
at Object.<anonymous> (/Users/user/workspace/app/src/index.ts:228:35)
at step (/Users/user/workspace/app/node_modules/tslib/tslib.js:136:27)
at Object.next (/Users/user/workspace/app/node_modules/tslib/tslib.js:117:57)
at /Users/user/workspace/app/node_modules/tslib/tslib.js:110:75
at new Promise (<anonymous>)
at Object.__awaiter (/Users/user/workspace/app/node_modules/tslib/tslib.js:106:16)
at Object.createConnection (/Users/user/workspace/app/node_modules/typeorm/index.js:186:20)
at rxjs_1.defer (/Users/user/workspace/app/node_modules/@nestjs/typeorm/dist/typeorm-core.module.js:151:29)
(node:19140) UnhandledPromiseRejectionWarning: AlreadyHasActiveConnectionError: Caught error after test environment was torn down
If I move the setup from beforeEach into a beforeAll block it's fine, but I'm afraid that once I create several spec files the error will come back. How should this be handled properly?
EDIT:
The problem was that each test sets up the application and therefore creates a new connection. The solution was to use keepConnectionAlive: true so that all tests reuse the same connection.
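Concretely, that means one extra line in the test TypeORM options shown at the top of the question (keepConnectionAlive is an @nestjs/typeorm option):

{
  type: 'sqlite',
  database: ':memory:',
  synchronize: true,
  dropSchema: true,
  keepConnectionAlive: true, // let every test's setup reuse the existing connection
  entities: [`dist/**/*.entity{.ts,.js}`],
}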
keepConnectionAlive: true is the way to go.
Using keepConnectionAlive: true produced the following error for me.
Jest did not exit one second after the test run has completed.
This usually means that there are asynchronous operations that weren't
stopped in your tests. Consider running Jest with
--detectOpenHandles to troubleshoot this issue.
Adding the below to each e2e test fixed my issue:
afterEach(async () => {
  await app.close();
});
Based on 0xCAP's answer, you can also do something like this:
// jest.setup.ts
jest.mock("/path/to/database/config/object", () => {
  const { databaseConfig, ...rest } = jest.requireActual("/path/to/database/config/object")

  return {
    ...rest,
    databaseConfig: {
      ...databaseConfig,
      keepConnectionAlive: true // replace the old config
    }
  }
})

// jest.config.js
module.exports = {
  // ...other options
  setupFilesAfterEnv: ["jest.setup.ts"],
}