Deploying a Smart Contract - solidity

If I were to deploy a smart contract for NFTs that I want to sell, what is the best way to do this? And is utilising Injected Web3 on Remix.ethereum.org an actual option?

Also, if you have a large NFT project, it will only be realistically possible to deploy large numbers of NFTs using scripts in a Solidity development framework.

If your deployment process is simple, Remix IDE's Injected Web3 is a viable option.
If you need to run deployment scripts, for example to mint after deployment, run some test transactions, or share your contract address on Twitter, then frameworks such as Hardhat are really useful, because you can create complex deployment scripts with them. For Hardhat, that is a JS file.
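To make that concrete, here is a minimal sketch of what such a Hardhat deployment script could look like (assuming the hardhat-ethers plugin; the contract name MyNFT and its mint(address) function are placeholders, not something taken from the question):

// scripts/deploy.js - a minimal sketch of a Hardhat deployment script.
const { ethers } = require("hardhat");

async function main() {
  // "MyNFT" is a placeholder for your own contract name.
  const MyNFT = await ethers.getContractFactory("MyNFT");
  const nft = await MyNFT.deploy();   // send the deployment transaction
  await nft.deployed();               // wait until it is mined (ethers v5 style)
  console.log("NFT contract deployed to:", nft.address);

  // Example post-deployment step: mint the first token to the deployer.
  // This assumes your contract exposes a mint(address) function.
  const [deployer] = await ethers.getSigners();
  const tx = await nft.mint(deployer.address);
  await tx.wait();
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});

You would run it with npx hardhat run scripts/deploy.js --network <your-network>.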
You can also do it with your bare hands: compile your code with solc, get the bytecode (and maybe the ABI too), send a transaction with no recipient address and your bytecode as the data, and you have a contract on the blockchain.
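As a rough sketch of that manual route (assuming ethers.js v5; the RPC URL and the MyNFT.bin file produced by solc --bin are placeholders):

// deploy-raw.js - sketch of deploying raw bytecode without a framework.
const { ethers } = require("ethers");
const fs = require("fs");

async function main() {
  const provider = new ethers.providers.JsonRpcProvider("http://127.0.0.1:8545"); // your node
  const wallet = new ethers.Wallet(process.env.PRIVATE_KEY, provider);

  // "MyNFT.bin" stands in for the creation bytecode produced by solc --bin.
  const bytecode = "0x" + fs.readFileSync("MyNFT.bin", "utf8").trim();

  // A contract-creation transaction is a normal transaction with no "to" address;
  // the bytecode goes in as the data field.
  const tx = await wallet.sendTransaction({ data: bytecode });
  const receipt = await tx.wait();
  console.log("Contract created at:", receipt.contractAddress);
}

main().catch(console.error);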

Related

Project organization for .NET Core local development

I am relatively new to .NET development and have a problem with how to set up my local dev environment. I have an ASP.NET Core REST API that uses different "services" internally. The project is relatively huge, so we have split our services into external libraries which we pull in via dependencies. That works perfectly fine. The only problem I have is when integrating a new service, or when developing new features of an existing service and I want to build / test the API with my new locally built service.
Is this possible? What would be an easy way to handle this use case? Removing the external dependency and adding a local one during development? I googled and searched here on Stack Overflow but did not find any solution.
Thanks for your help!
Regards
Sebastian
To debug NuGet packages you can use symbol packages to publish symbols along with your DLLs. Visual Studio supports symbol sources, which allow you to download symbols and debug the source code. But the problem can be deeper...
Ideally the external library should have a stable interface that can be tested by automated tests living alongside the library itself. Even significant changes shouldn't cause any changes or problems in the calling code. In that case, removing the external dependency and adding a local one during development should be OK, since it is a very rare action.
If you need to debug your packages very often, it usually means that both parts change together often. It can signal one of the problems listed below:
Your code and the package's code are highly coupled, so the boundary between the modules was chosen badly. In that case, consider redesigning both parts to make them more independent and decrease the coupling. Independent parts usually don't need to be debugged together.
Both parts have high cohesion. In that case you definitely need to keep them together.
The library has an unstable interface. In that case keep the parts together until the interface becomes stable.
I solved it using the Condition attribute on the ItemGroup element in my csproj file, in combination with local flag files. The csproj config looks like:
<!-- erp service -->
<ItemGroup Condition="Exists('..\.localdev_erp')">
  <ProjectReference Include="..\..\erp-svc\MyCompany.Service.Erp\MyCompany.Service.Erp.csproj" />
</ItemGroup>
<ItemGroup Condition="!Exists('..\.localdev_erp')">
  <PackageReference Include="MyCompany.Service.Erp" Version="1.0.9" />
</ItemGroup>
The files can easily be created and removed locally; only a dotnet restore may be required after adding / removing a local reference. A small drawback of this solution is that a somewhat homogeneous local environment is required for all developers.
We use this solution for the following use cases:
integration / onboarding of new services
exploratory programming of new features / technologies
bug hunting in the integration layer, which may require writing regression tests on both sides, the API and the service.

Best IDE / Plugin for developing Solidity

I am developing some complex Solidity smart contracts (using some external libraries such as Oraclize).
The thing is that the IDEs I am using for the moment, Remix and the Oraclize IDE, don't fit the requirements that I have. I need:
To compile, deploy and test a smart contract that can use the Oraclize library
To have the files locally and be able to use a private GitHub repository
To compile the contracts only when pressing Ctrl + S
To have a desktop environment (like IntelliJ or Atom)
I have tried some plugins like Etheratom (with lots of smart contracts the program breaks) and the IntelliJ Solidity plugin (with this one I don't know how to compile and deploy the contracts).
EDIT:
And I missed a very important feature that I want and that no IDE I have tried has:
Give the exact position of errors like invalid opcode
I have searched a lot and didn't find anything.
Since Solidity is relatively young, a lot of steps have to be done manually when setting up the compiling and deployment process.
In fact, there are tools to help you with these processes:
Truffle - a development environment, testing framework and asset pipeline aiming to make life as an Ethereum developer easier.
Ganache-CLI or Ganache-GUI - an Ethereum RPC client for testing and development.
IntelliJ-Solidity - a plugin for IntelliJ-based IDEs that offers syntax highlighting, code formatting and autocomplete for Solidity files.
Solidity Development: Setting up environment
IMHO, as a previous user of Atom and IntelliJ, I would recommend VSCode.
I find these extensions really great for developing smart contracts with Solidity in VSCode:
Material Icon Theme, for better distinguishing folders and files;
solidity or solidity-solhint, Ethereum Solidity Language for Visual Studio Code;
Trailing spaces, highlight trailing spaces and delete them;
Rainbow Brackets, for cool brackets; :)
Indent rainbow, for better and easier indentation;
GitHistory and GitLens.
You should have a better experience with that, and can then try testing your code in the plugin.

Why is akka.persistence still in beta release? Is it stable?

Why does akka.persistence still have only a beta release among the NuGet packages? Does it imply it is still not stable and not good for use in production applications?
In Akka.NET, in order to get out of prerelease, a package must meet multiple criteria, like:
Having a full test suite up and running. In the case of clustered plugins, this also includes multi-node tests.
Having a fixed API. There are dedicated API approval tests ensuring that no public API has been accidentally changed.
Having a battery of performance tests. While many of the plugins are ready and usually fast without them, stress tests are needed to check that none of the merged pull requests introduced any performance penalties.
Having all documentation written and published.
While this is a lot, not all of these are necessary to make a plugin functional. In the case of Akka.Persistence there are minor changes pending (like the deprecation of PersistentView in favor of persistence queries), but the plugin itself is production ready and already used as such. However, the maturity of the persistence backend plugins used underneath may vary.
Akka.Persistence is stable now. You can install it by running the following command in the Package Manager Console:
Install-Package Akka.Persistence

Write a YARN application for a Non-JVM application

Assume I want to use a YARN cluster to run a non-JVM distributed application (e.g. .NET based; is this a good idea?). From what I have read so far, I need to develop a YARN application that consists of:
a YARN client that is used to submit the job to the YARN framework
a YARN ApplicationMaster, which is the core that orchestrates the application in the cluster.
It seems that the two pieces need to be written against the YARN APIs, which are offered as JAR libraries, which means they have to be written in one of the JVM languages. It seems it's possible to write the YARN client using the REST APIs, correct? If so, the client can be written in any language (e.g. C# on .NET). However, for the ApplicationMaster this does not seem to be the case, and it has to run on the JVM. Correct?
I'm new to YARN. Just want to confirm whether my understanding is correct or not.
The YARN client and ApplicationMaster need to be written in Java, as they are the ones that use the YARN Java API. The RESTful API, https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html, is really about offering the commands that you can run from the CLI.
Fortunately, your "container" processes can be created with just about anything. http://hortonworks.com/blog/apache-hadoop-yarn-concepts-and-applications/ says it best with the following quote:
"This allows the ApplicationMaster to work with the NodeManager to launch containers ranging from simple shell scripts to C/Java/Python processes on Unix/Windows to full-fledged virtual machines (e.g. KVMs)."
That said, if you are trying to bring a non-Java application (or even a Java one!) that is already a distributed application of sorts, then the Slider framework, http://slider.incubator.apache.org/, is probably the best starting point for you.

Using Nuget in development environment - best practices / how to

Trying to figure out the best way to use NuGet in a development environment to manage our own libraries.
We want to standardize on the NuGet way of doing things for our 3rd-party libs, but would also like to use NuGet to manage our internal utility libraries. For developers consuming the in-house libs this is great and everyone's happy. However, for devs actively working on the utility lib it seems more problematic: their previous process of build lib, build main app, F5 and go is now slowed down by publishing and updating potentially lots of packages, not to mention the moaning about the additional process!
We use TDD on the internal libs, but everyone needs to be able to debug and modify the libs along with the main app. I have seen Phil Haack's demo on debug packages in 1.3 and read David Ebbo's blog, but that fits a different scenario.
So what is the best process for dev/debug cycles? If we use NuGet, do we need to accept the existing constraints? Is there a hybrid practice people are using, and does 1.3 maybe get closer to automating all this? Or do we just avoid NuGet for internal packages, which would be a real shame?
Loving NuGet, maybe wanting way too much from the little guy; feedback appreciated.
Thanks
I'd suggest you use separate network shares or feeds (similar to what myget.org supports in the cloud) for different scenarios.
You could imagine creating a CI share, a QA share, a Releases share, ...
Make people working on the referenced library do CI builds that drop CI packages into the CI repository, for instance, and have them picked up by other projects (which just need to do a simple update; this could be automated through PowerShell in a pre-build step: check for a new version and, if there is one, update).
Just make sure that when products release their milestones, they also release with released dependencies (this could be as simple as switching feeds; releases will always have a higher version number than CI builds).
Hope that helps!
Cheers,
Xavier
If you're working on the source code for the lib and the main app at the same time, I'd say NuGet is probably not a good solution. I think it'll only work in situations where you work with a "stable" version of the library that doesn't need to change frequently during the development of your main app.
That said - is it possible the development of your library could be done in isolation? You already mention you're doing TDD on the lib, so why can't that work be done first, then built and deployed, and then the main app work done?