After I clone the repository using nodegit and call getHeadCommit(), the Node process keeps a lock on the directory, which prevents it from being removed either by code (fs-extra's remove) or by the OS.
import * as path from 'path';
import * as os from 'os';
import * as nodegit from 'nodegit';
import * as fse from 'fs-extra';

console.log((async (): Promise<void> => {
  const tempDirectory: string = path.join(process.cwd(), '.tmp');

  console.log('clone');
  const repository: nodegit.Repository = await nodegit.Clone.clone(
    'ssh://git@***.git',
    tempDirectory,
    {
      checkoutBranch: 'master',
      fetchOpts: {
        callbacks: {
          certificateCheck: (): number => 1,
          credentials: (url: string, userName: string): nodegit.Cred =>
            nodegit.Cred.sshKeyNew(
              userName,
              path.join(os.homedir(), '.ssh', 'id_rsa.pub'),
              path.join(os.homedir(), '.ssh', 'id_rsa'),
              ''
            )
        }
      }
    }
  );

  console.log('get head commit');
  const commit: nodegit.Commit = await repository.getHeadCommit();

  console.log('remove');
  await fse.remove(tempDirectory); // Here Node hangs

  console.log('end');
})());
Error message:
Error: EBUSY: resource busy or locked, unlink '***\.tmp\.git\objects\pack\pack-27924883cff8a0039ced57d07bad35459885ff9d.pack'
Is there an error in my code? Or is there a method in nodegit for releasing the repository directory after using repository.getHeadCommit()?
Oops! I missed the free method; calling it fixed the problem.
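A minimal sketch of how that looks, based on the snippet above (assuming nothing else still references the repository):

console.log('get head commit');
const commit: nodegit.Commit = await repository.getHeadCommit();

// Release the repository's native (libgit2) handles so the pack files
// under .tmp/.git/objects/pack are no longer locked by the process.
repository.free();

console.log('remove');
await fse.remove(tempDirectory); // No longer hangs
console.log('end');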
I have two issues that may or may not be related.
Overview
Folder Structure
pages
|---user.vue
server
|---api
    |---profile.get.ts
    |---profile.post.ts
The stack is Nuxt 3 (using the server directory) and Supabase.
profile.get.ts
import { serverSupabaseClient, serverSupabaseUser } from "#supabase/server"
import { Database } from "~~/types/supabase"

export default defineEventHandler(async (event) => {
  try {
    const supabase = serverSupabaseClient<Database>(event)
    const user = await serverSupabaseUser(event)
    const query = getQuery(event)

    const { data, error } = await supabase.from('profiles').select('*').eq('email', query.email).single()
    if (error) throw { status: error.code, message: error.message }

    return { displayName: data.display_name, avatarUrl: data.avatar_url }
  } catch (err) {
    console.error('Handled Error:', err)
  }
})
profile.post.ts
import { serverSupabaseClient } from "#supabase/server"
import { Database } from "~~/types/supabase"

export default defineEventHandler(async (event) => {
  const supabase = serverSupabaseClient<Database>(event)
  const { displayName, avatarUrl, email }: { displayName: string, avatarUrl: string, email: string } = await readBody(event)

  const { error } = await supabase.from('profiles').update({ display_name: displayName, avatar_url: avatarUrl }).match({ email })
  if (error) throw new Error(error.message)

  return { status: 200 }
})
user.vue Snippet
onMounted(() => {
  setTimeout(() => {
    getProfile()
  }, 100) // Fails when around 50 or less
})

async function getProfile() {
  const { data, error } = await useFetch('/api/profile', { method: 'GET', params: { email: user.value?.email } })

  console.log(data.value)
  console.log(error.value)

  displayName.value = data.value!.displayName || ''
  avatarUrl.value = data.value!.avatarUrl || ''
}
Problem 1
When user.vue mounts, I want to call my Nuxt API (profile.get.ts) and fetch user data (display name, avatar URL) from the Supabase database. However, I receive this error when fetching on mount: FetchError: 404 Cannot find any route matching /api/profile. (/api/profile). If I delay the call with a 100ms setTimeout, it fetches fine. That makes me think the API server is simply not ready yet, but the documentation doesn't mention that and encourages fetching during lifecycle hooks.
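For reference, a minimal sketch of fetching at setup time instead of inside onMounted, using a top-level await with useFetch in <script setup> (I'm assuming user comes from the Supabase module's useSupabaseUser() composable; whether this sidesteps the 404 in this setup is untested):

<script setup lang="ts">
const user = useSupabaseUser()

// Fetch during component setup rather than in onMounted + setTimeout.
const { data, error } = await useFetch('/api/profile', {
  method: 'GET',
  params: { email: user.value?.email }
})

console.log(data.value, error.value)
</script>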
Problem 2
Volar seems to be confused about the typing of data from getProfile().
Property 'displayName' does not exist on type '{ status: number; } | { displayName: string | null; avatarUrl: string | null; }'.
Property 'displayName' does not exist on type '{ status: number; }'.ts(2339)
However, this union includes the return type of profile.post.ts, even though I'm calling the endpoint with GET (profile.get.ts).
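To illustrate what that union forces on the caller, a narrowing check like the one below satisfies the compiler (a hypothetical workaround for the error, not necessarily a fix for the GET/POST type mix-up itself):

const { data, error } = await useFetch('/api/profile', { method: 'GET', params: { email: user.value?.email } })

// data.value is typed as { status: number } | { displayName: ...; avatarUrl: ... } | null,
// so narrow it before touching the GET-only fields:
if (data.value && 'displayName' in data.value) {
  displayName.value = data.value.displayName || ''
  avatarUrl.value = data.value.avatarUrl || ''
}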
Current Behavior
Without setTimeout at 100ms or greater, it will fail with the 404 message
With setTimeout at 100ms or greater, or with getProfile() called from a button, there is no issue, even with the TypeScript errors, etc.
Desired Behavior
TypeScript correctly recognizes the proper endpoint (profile.get.ts, since I'm calling it with GET)
Data can be fetched on mount from the API without the use of setTimeout
I'm getting the error below while trying to deploy a smart contract with Hardhat. Error details:
TypeError: Cannot read property 'sendTransaction' of null
at ContractFactory.<anonymous> (C:\Collection\node_modules\@ethersproject\contracts\src.ts\index.ts:1249:38)
at step (C:\Collection\node_modules\@ethersproject\contracts\lib\index.js:48:23)
at Object.next (C:\Collection\node_modules\@ethersproject\contracts\lib\index.js:29:53)
at fulfilled (C:\Collection\node_modules\@ethersproject\contracts\lib\index.js:20:58)
Here are the config files
hardhat.config.js
require('@nomiclabs/hardhat-waffle');
require("@nomiclabs/hardhat-ethers");
require("dotenv").config();

// This is a sample Hardhat task. To learn how to create your own go to
// https://hardhat.org/guides/create-task.html
task("accounts", "Prints the list of accounts", async (taskArgs, hre) => {
  const accounts = await hre.ethers.getSigners();

  for (const account of accounts) {
    console.log(account.address);
  }
});

// You need to export an object to set up your config
// Go to https://hardhat.org/config/ to learn more

/**
 * @type import('hardhat/config').HardhatUserConfig
 */
module.exports = {
  solidity: "0.8.2",
  networks: {
    mumbai: {
      url: process.env.MUMBAI_URL,
      account: process.env.PRIVATE_KEY
    }
  }
};
deploy.js
const { ethers } = require("hardhat");

async function main() {
  const SuperMario = await ethers.getContractFactory("SuperMario");
  const superInstance = await SuperMario.deploy("SuperMarioCollection", "SMC");
  await superInstance.deployed();
  console.log("contract was deployed to:", superInstance.address());

  await superInstance.mint("https://ipfs.io/ipfs/XXXXXXX");
}

// We recommend this pattern to be able to use async/await everywhere
// and properly handle errors.
main()
  .then(() => process.exit(0))
  .catch((error) => {
    console.error(error);
    process.exit(1);
  });
I am trying to deploy it using the following command
npx hardhat run scripts/deploy.js --network mumbai
thanks
Change account to accounts in the network config
Found the fix. There was an error in the hardhat.config file: instead of account:, it should have been accounts: [process.env.PRIVATE_KEY].
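So the networks block in hardhat.config.js becomes:

networks: {
  mumbai: {
    url: process.env.MUMBAI_URL,
    accounts: [process.env.PRIVATE_KEY]
  }
}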
I use an ssh command inside Cypress's cy.exec(). The ssh command itself works properly, but I always get a timeout error from exec().
My code is:
cy.exec('ssh username@111.111.1.1 "\file.exe"')
This code actually runs and I can see file.exe working on the remote desktop, but exec() still reports an error.
I think I have to pass some options to the ssh command, like -q or -w. I tried some of them, but it did not work.
Could you please help me?
Maybe exec is too limited in the way you can interact with the command being called; you need a programmable API rather than just configuration options.
You can try using a task instead. There is a library, node-ssh, that will interface with a Cypress task.
I'm not familiar with the ssh details, so you will have to fill those in, but here is a basic example of a Cypress task using the node-ssh library.
cypress/plugins/index.js
module.exports = (on, config) => {
  on('task', {
    ssh(params) {
      const { username, host, remoteCommand } = params // destructure the argument
      const { NodeSSH } = require('node-ssh')
      const ssh = new NodeSSH()

      ssh.connect({
        host: host,
        username: username,
        privateKey: '/home/steel/.ssh/id_rsa' // maybe parameterize this also
      })
      .then(function () {
        ssh.execCommand(remoteCommand, { cwd: '/var/www' })
          .then(function (result) {
            console.log('STDOUT: ' + result.stdout)
            console.log('STDERR: ' + result.stderr)
          })
      })

      return null
    },
  })
}
Returning result
The above ssh code is just from the example page of node-ssh. If you want to return the result value, I think this will do it.
module.exports = (on, config) => {
  on('task', {
    ssh(params) {
      const { username, host, remoteCommand } = params // destructure the argument

      // returning a promise, which is awaited by Cypress
      return new Promise((resolve, reject) => {
        const { NodeSSH } = require('node-ssh')
        const ssh = new NodeSSH()

        ssh.connect({
          host: host,
          username: username,
          privateKey: '/home/steel/.ssh/id_rsa'
        })
        .then(function () {
          ssh.execCommand(remoteCommand, { cwd: '/var/www' })
            .then(function (result) {
              resolve(result) // resolve to the command result
            })
        })
      })
    },
  })
}
In the test
cy.task('ssh', { username: 'username', host: '111.111.1.1', remoteCommand: '\file.exe' })
  .then(result => {
    ...
  })
Only a single parameter is allowed for a task, so pass in an object and destructure it inside the task as shown above.
Is it possible to use CloudFlare's Workers KV when developing a Svelte/kit application?
It is possible to build the app and then run wrangler dev when using the CloudFlare Workers adapter:
npm run build
wrangler dev
However, I haven't gotten hot module reloading working:
npm run dev & wrangler dev
As far as I know, there's no way to emulate Workers KV locally, so I set up a local Redis instance as a substitute.
Then I created some wrapper functions for the KV store: in development they talk to Redis, and in production they talk to Workers KV. For instance, here's the wrapper function for get.
import { dev } from '$app/env'
import { promisify } from 'util'
import redis from 'redis'

const client = redis.createClient()
const get = promisify(client.get).bind(client)

export const getKvValue = async (key: string): Promise<string | null> => {
  return dev ? await get(key) : await KV.get(key)
}
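The set wrapper follows the same pattern; a sketch assuming the same promisified Redis client and the KV binding name used above:

const set = promisify(client.set).bind(client)

export const setKvValue = async (key: string, value: string): Promise<void> => {
  if (dev) {
    await set(key, value) // local Redis in development
  } else {
    await KV.put(key, value) // Workers KV in production
  }
}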
Update: You can actually make things much simpler by just using an object in JavaScript—no need to download and run a Redis binary. Just make sure to JSON.stringify the values before setting them.
import { dev } from '$app/env'

const devKvStore: Record<string, string> = {}

const devGetKvValue = (key: string): Promise<string | null> => {
  return new Promise((resolve) => {
    resolve(devKvStore[key] ?? null)
  })
}

const devSetKvValue = (key: string, value: unknown): Promise<void> => {
  return new Promise((resolve) => {
    devKvStore[key] = JSON.stringify(value)
    resolve()
  })
}

export const getKvValue = async (key: string): Promise<string | null> => {
  return dev ? await devGetKvValue(key) : await KV.get(key)
}

export const setKvValue = async (key: string, value: unknown): Promise<void> => {
  return dev ? await devSetKvValue(key, value) : await KV.put(key, JSON.stringify(value))
}
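Usage is then identical in development and production. For example, in a hypothetical SvelteKit endpoint (route name and import path are just for illustration; the wrappers are assumed to live in $lib/kv):

// src/routes/visits.ts (hypothetical endpoint, for illustration)
import { getKvValue, setKvValue } from '$lib/kv'

export const get = async () => {
  // Values were stored with JSON.stringify, so parse on the way out.
  const raw = await getKvValue('visits')
  const visits = raw ? Number(JSON.parse(raw)) : 0

  await setKvValue('visits', visits + 1)

  return { body: { visits: visits + 1 } }
}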
I have read the documentation and followed the tutorial step by step, but I have only managed to run the app.
Documentation: http://electron.atom.io/docs/tutorial/using-selenium-and-webdriver/
I cannot make the connection with chromedriver work; when I launch the test and try to click a simple button, I get this:
Error: ChromeDriver did not start within 5000ms
    at Error (native)
    at node_modules/spectron/lib/chrome-driver.js:58:25
    at Request._callback (node_modules/spectron/lib/chrome-driver.js:116:45)
    at Request.self.callback (node_modules/spectron/node_modules/request/request.js:200:22)
    at Request.<anonymous> (node_modules/spectron/node_modules/request/request.js:1067:10)
    at IncomingMessage.<anonymous> (node_modules/spectron/node_modules/request/request.js:988:12)
    at endReadableNT (_stream_readable.js:913:12)
    at _combinedTickCallback (internal/process/next_tick.js:74:11)
    at process._tickCallback (internal/process/next_tick.js:98:9)
My code:
"use strict";
require("co-mocha");
var Application = require('spectron').Application;
var assert = require('assert');
const webdriver = require('selenium-webdriver');
const driver = new webdriver.Builder()
.usingServer('http://127.0.0.1:9515')
.withCapabilities({
chromeOptions: {
binary: "./appPath/app"
}
})
.forBrowser('electron')
.build();
describe('Application launch', function () {
this.timeout(100000);
var app;
beforeEach(function () {
app = new Application({
path: "./appPath/app"
});
return app.start();
});
afterEach(function () {
if (app && app.isRunning()) {
return app.stop();
}
});
it('click a button', function* () {
yield driver.sleep(5000);
yield driver.findElement(webdriver.By.css(".classSelector")).click();
});
});
Thanks and sorry for my English.
I recommend using Spectron, which is a less painful way of testing your Electron app. In my opinion the perfect combination is using it with the AVA test framework, which lets tests run concurrently.
async/await is another big win, allowing you to write very clean test cases.
And if you have a test that needs to run serially, you can use test.serial:
test.serial('login as new user', async t => {
  let app = t.context.app
  app = await loginNewUser(app)
  await util.screenshotCreateOrCompare(app, t, 'new-user-mission-view-empty')
})

test.serial('Can Navigate to Preference Page', async t => {
  let app = t.context.app
  await app.client.click('[data-test="preference-button"]')
  await util.screenshotCreateOrCompare(app, t, 'new-user-preference-page-empty')
})
And just for reference, here are my helper hooks:
test.before(async t => {
  app = util.createApp()
  app = await util.waitForLoad(app, t)
})

test.beforeEach(async t => {
  t.context.app = app
})

test.afterEach(async t => {
  console.log('test complete')
})

// CleanUp
test.after.always(async t => {
  // This runs after each test and other test hooks, even if they failed
  await app.client.localStorage('DELETE', 'user')
  console.log('delete all files')
  const clean = await exec('rm -rf /tmp/DesktopTest')
  await clean.stdout.on('data', data => {
    console.log(util.format('clean', data))
  })
  await app.client.close()
  await app.stop()
})
The util function:
// Returns a promise that resolves to a Spectron Application once the app has loaded.
// Takes an Ava test. Makes some basic assertions to verify that the app loaded correctly.
function createApp (t) {
  return new Application({
    path: path.join(__dirname, '..', 'node_modules', '.bin',
      'electron' + (process.platform === 'win32' ? '.cmd' : '')),
    // args: ['-r', path.join(__dirname, 'mocks.js'), path.join(__dirname, '..')],
    env: { NODE_ENV: 'test' },
    waitTimeout: 10e3
  })
}
First off, Spectron (which is a wrapper for WebdriverIO) and WebdriverJS (which is part of selenium-webdriver) are two different frameworks; you only need to use one of them for your tests.
If you are using WebdriverJS, then you need to run ./node_modules/.bin/chromedriver in this step: http://electron.atom.io/docs/tutorial/using-selenium-and-webdriver/#start-chromedriver
I could get ChromeDriver working by adding a proxy exception in my terminal.
export {no_proxy,NO_PROXY}="127.0.0.1"