I'm going straight to the point: I want to verify a Solidity contract using the solc compiler; specifically, I'm using solc-js (along with the standard JSON input option) to load a remote version of the compiler. Up to here, no problem at all: I am indeed able to verify simple contracts such as Greeter.sol.
The problem that arises is the following: whenever a contract has an import statement such as import "@openzeppelin/contracts/token/ERC20/ERC20.sol";, I cannot find a way to dynamically load the references that the contract might have.
Things I have already tried: using URLs in the input JSON, which ends up in an error: "File import callback not supported".
Here is a screenshot of the code. It shows the creation of the JSON input, followed by the actual compilation of the contract:
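For reference, this is roughly what that flow looks like with solc-js and a remote compiler version (the contract name, source file, and version string below are placeholders):

const fs = require('fs');
const solc = require('solc');

// Standard JSON input describing the contract to compile
const source = fs.readFileSync('MyContract.sol', 'utf8');
const input = {
  language: 'Solidity',
  sources: {
    'MyContract.sol': { content: source }
  },
  settings: {
    outputSelection: { '*': { '*': ['abi', 'evm.bytecode'] } }
  }
};

// Load a remote compiler version and compile the JSON input
solc.loadRemoteVersion('v0.8.17+commit.8df45f5f', (err, remoteSolc) => {
  if (err) throw err;
  const output = JSON.parse(remoteSolc.compile(JSON.stringify(input)));
  console.log(output);
});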
I am aware that there is the possibility to add a so-called "import callback", but I can't get it to work. The docs are really poor and I am really struggling with that.
My goal would be to dynamically load every possible import statement of the input contract.
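Roughly, this is the shape I'm trying to get working (just a sketch, and I'm not sure it's correct; resolving imports from node_modules is only one possible strategy, and remoteSolc and input are the same as in the snippet above):

const path = require('path');

function findImports(importPath) {
  // e.g. "@openzeppelin/contracts/token/ERC20/ERC20.sol"
  try {
    const resolved = path.join('node_modules', importPath);
    return { contents: fs.readFileSync(resolved, 'utf8') };
  } catch (e) {
    return { error: 'File not found: ' + importPath };
  }
}

const output = JSON.parse(
  remoteSolc.compile(JSON.stringify(input), { import: findImports })
);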
Hope the description of my problem is detailed enough!
Thanks in advance
I have been searching the official docs and existing questions and could not find any information on this: in Elm, how is it possible to see the members/functions/variables that belong to, or are exposed by, a package (like the dir function in Python), without having to dive into the source code each time?
What I want to do is get a simple list of the functions exposed by an imported module. (So for a module like List, it should output reverse, all, any, map, etc.) I have attempted tab completion in elm repl and the Elm extension available in the VS Code editor, and elm repl does not offer anything like help, doc, ?, dir, or man, so I have no idea where to even start. I'm wondering how everyone else does this other than pulling up the source code for each and every package they use.
I apologize for the newbie question, and if I've misread or missed anything, but I couldn't even find anything in the https://elmprogramming.com tutorial. Thanks in advance.
Nothing like this exists in Elm to do reflection over modules, unfortunately (as of 0.19.1, at least).
However, if you aren't looking to actually do this kind of thing at runtime, but rather want a convenient way of finding things out during development: the Elm package system enforces the requirement that all public functions are documented. So if you visit a package's page on https://package.elm-lang.org, every public function and type will be documented there (obviously it can't enforce the content of the documentation, but at the very least everything will be listed).
When serving lit-element components with Nollup, I keep getting the following error in the browser console that I am not able to track down:
toast-messages.js:56 Uncaught TypeError: modules[number] is not a function
at create_bindings (toast-messages.js:56)
at toast-messages.js:57
at Object.48 (toast-messages.js:365)
at create_bindings (toast-messages.js:56)
at _require (toast-messages.js:141)
at toast-messages.js:249
at toast-messages.js:251
Can anyone point me in the right direction? (I can share my rollup.config.js if required)
This error means that a module has been requested, but there's no module with the specified id (number) included in the bundle. This can happen for a variety of reasons:
* A require call passing an invalid id.
* Passing a string or something other than a number into require.
* Using a library that was compiled to CommonJS but was not transformed to ESM.
It's likely that you're missing the CommonJS plugin, or, if you do have it, that the plugin isn't able to catch the require call and convert it. The latter can happen because of obfuscation in the library code. CommonJS plugins work by using static analysis. It would be very difficult for the plugin to transform the following:
var r = require;
r('my-library-code');
Without executing the code, it would be difficult for static analysis to track this. A best-effort attempt can be made, but there will always be situations where it can fail.
So here are the steps you should take:
* Confirm that the CommonJS plugin is being used (a minimal config sketch follows this list).
* If it is, check the offending file in node_modules for unusual require patterns.
* If there are any, file an issue with the CommonJS plugin maintainers and see if it's possible to solve; if not, you might need to contact the maintainer of the toast-messages library.
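For the first point, here is a minimal rollup.config.js sketch (the entry file is a placeholder; this assumes @rollup/plugin-node-resolve and @rollup/plugin-commonjs are installed, and that Nollup is reading the same config Rollup would):

import { nodeResolve } from '@rollup/plugin-node-resolve';
import commonjs from '@rollup/plugin-commonjs';

export default {
  input: 'src/index.js',            // placeholder entry point
  output: { dir: 'dist', format: 'esm' },
  plugins: [
    nodeResolve(),                  // locate bare imports in node_modules
    commonjs()                      // convert CommonJS dependencies to ESM
  ]
};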
I do realise I'm posting this very late, but better to answer late than never! To avoid this vague error in the future, in 0.10.2 of Nollup I've added a user-friendly error that lists some things to check for.
Hope this helps!
I am working on a web visualisation for a given project (a database with genes and proteins; the task is to do some nice visualisation with Spring Boot and Thymeleaf). The project was provided with all files, yet I am missing some libraries (which, of course, also leads to some errors in the code).
When trying to import:
import org.apache.commons.lang3.tuple.ImmutablePair;
import org.apache.commons.lang3.tuple.Pair;
lang3 gives me a "cannot resolve symbol" error, and automatically searching for JARs on the web results in "no jars found". I found the lang3 JAR manually and successfully added it to the project library. When importing:
import org.apache.commons.lang3.*;
All errors for the tuple.Pair usage are gone. Yet usage of ImmutablePair still results in a "cannot resolve symbol" error.
Firstly, I am confused and now unsure whether my knowledge of imports is correct. I learned that if, for example, you import something.anything.x and something.anything.y, you could also just import something.anything.*; and x and y would be covered.
Secondly, if needed, where do I find the JARs I need (I could not find them yet)?
Thanks :)
I do not know where this error comes from, but I do know a solution. When adding the library, select "From Maven", search for Apache commons-lang3, and add it. That resolves the problems.
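For reference, the library's Maven coordinates are org.apache.commons:commons-lang3. If the project is built with Maven, the dependency entry in pom.xml would look roughly like this (the version is only an example; use whatever current release you need):

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.12.0</version>
</dependency>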
I'm developing a simple web service which is going to add user-provided facts to my Prolog database (using assert). I thought it better to keep these dynamic facts ("data") separate from my service rules that operate on these facts ("code"), hence I split them into two different modules. The main reason was that I wanted to persist the dynamic facts to disk periodically, while being able to develop the code without issues and independently of the user data.
I've been using assert(my_store:fact(...)) to add user data to the my_store module and then in the code module I started coding rules like
:- module(my_code, [a_rule/1, ...]).
a_rule(Term) :-
my_store:fact(...), ...
All seems OK, but with this approach my_store is hard-coded in the code module, which is a little worrying. For example, what if after some time I decide to change the data module's name, or perhaps I'll need two separate data modules, one persisted frequently and another persisted only sporadically?
Can anyone advise on best practices for code and data organisation? Perhaps separation of code and data is against "the Prolog way"? Are there any good books that cover these issues in depth?
Cheers,
Jacek
That's a good question, touching on several very important topics.
I hope that the following comments help you to sort out most of your questions, possibly more so if you follow up on the points that interest you most with new questions that address the specific question in isolation.
First, when accepting user code as input, make sure you only allow safe code to be added to your programs. In SWI-Prolog, there is safe_goal/1, which helps you to ensure some safety properties. It's not perfect, but better than nothing.
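A minimal sketch of how that might be used as a gatekeeper (assuming SWI-Prolog's library(sandbox), which provides safe_goal/1; call_user_goal/1 is just a hypothetical wrapper):

:- use_module(library(sandbox)).

% Call a user-supplied goal only if the sandbox analysis accepts it;
% safe_goal/1 throws an exception for goals it considers unsafe.
call_user_goal(Goal) :-
    catch(safe_goal(Goal), Error, (print_message(error, Error), fail)),
    call(Goal).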
Again in SWI-Prolog, there is library(persistency). Please carefully study the documentation to see how it works, and how you can use it to store dynamic data on disk.
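A rough sketch of how the fact/1 store from the question might be declared with it (the predicate name, type, and journal file are illustrative):

:- use_module(library(persistency)).

:- persistent
       fact(term:any).

:- initialization(db_attach('my_store.journal', [])).

% assert_fact/1 and retract_fact/1 are generated by the library
% and journal every change to the attached file.
add_user_fact(F) :-
    assert_fact(F).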
Regarding the module names, I have two comments:
Explicit module qualifications of goals are very rare. Simply load the module and use its clauses.
Note that you can load modules dynamically. That is, nothing prevents you from using use_module/1 with a parameter that was obtained from somewhere else. This allows you more flexibility to specify the module from which you want to fetch your definitions, without having to change and recompile your code.
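For example, the module file to load can be an ordinary runtime parameter (load_data_module/1 is a hypothetical loader):

% Spec can be computed, read from a configuration file, etc.
load_data_module(Spec) :-
    use_module(Spec).

?- load_data_module('my_store.pl').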
Regarding separation of code and data: In Prolog, there is no such separation.
All Prolog clauses are Prolog terms.
Terms! TERMS ALL THE WAY DOWN!
Thanks to @mat for his suggestions, which made me read and think a little more. Now I can post a potential solution to my issue; it's not ideal and doesn't use the persistency library, but it's a simple first attempt.
As mentioned, user data are stored with assert(my_store:fact(...)). That means the my_store module is created dynamically and there's no file that use_module could load. There is, however, the import/1 predicate, which I can use to import dynamically asserted facts, and so my solution looks like this:
:- module(my_code, [a_rule/1, ...]).
:- initialization import_my_store.
import_my_store :-
import(my_store:fact/1),
import(my_store:another_fact/1),
...
a_rule(Term) :-
fact(...), ...
Note that I can use fact/1 without explicit specification of the my_store module. And I can also easily dump the user data to a file.
save_db(File) :-
tell(File),
my_store:listing,
told.
The downside is that on initialization the import/1 calls generate warnings such as: import/1: my_store:fact/1 is not exported (still imported into my_code). But that's not a big issue because they are still imported into my_code and I can use the user facts without explicit module specification.
Looking forward to hearing any comments. Cheers,
A solution using Logtalk, which provides an alternative to modules. First define an object with your code:
:- object(my_code).
:- public([a_rule/1, ...]).
:- private([fact/1, another_fact/1, ...]).
:- dynamic([fact/1, another_fact/1, ...]).
a_rule(Term) :-
::fact(...), ...
...
:- end_object.
Then, dynamically create any number of data stores as necessary as extensions (derived prototypes) of the my_code object:
?- create_object(my_store, [extends(my_code)], [], []).
To query a data store, simply send it a message:
?- my_store::a_rule(Term).
The create_object/4 built-in predicate can load the persistency file for the store if necessary (so that you resume where you left off):
?- create_object(my_store, [extends(my_code)], [include('my_store.pl')], []).
User data can be saved in a data store by asserting it as expected:
?- my_store::assertz(fact(...)).
You will need a public predicate in the my_code object for dumping a data store to a file. For example:
:- public(dump/0).
dump :-
self(Self),
atom_concat(Self, '.pl', File),
tell(File),
dump_dynamic_predicates,
told.
dump_dynamic_predicates :-
current_predicate(Functor/Arity),
functor(Template, Functor, Arity),
predicate_property(Template, (dynamic)),
::Template,
write_canonical(Template), write('.\n'),
fail.
dump_dynamic_predicates.
Now you can dump a data store by typing:
?- my_store::dump.
Note that with this solution it is trivial to have any number of data stores concurrently. If a data store requires a specialized version of the code, you can simply extend the code object and then create the specialized data store as an extension of that specialized object.
autoscan produced a series of function checks (AC_FUNC_* and AC_CHECK_FUNCS), and AC_CHECK_HEADERS. Now I need to pull in gnulib code. Do I just try to import every function and header that autoscan identified? Half of them don't show up in the gnulib-tool --list output. Or do I wait until ./configure fails on some platform?
My suggestion would be to ignore autoscan entirely and instead try to figure out which platform you're targeting.
In particular, autoscan will try to add tests for functions that have been available on every single platform since 1990 but might have been missing on some non-SysV OS, for instance.
Start with a base platform target (such as C99 and the latest POSIX), and then figure out which other OSes you want to target and what they are missing.
Don't try to cover bases for OSes you have no access to; it's likely you won't actually know what they need in order to build, let alone run.
Full disclosure: I have changed my opinion on gnulib over the years and no longer suggest using it, if at all possible.
There are two ways to determine which gnulib modules you might need:
You can scan your source code or your object files (with nm) by hand.
You can import all the POSIX header-file modules via gnulib-tool, then add -DGNULIB_POSIXCHECK to the CFLAGS, compile your package, and analyze the resulting warnings (a rough sketch follows this list).
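For the second approach, the workflow might look roughly like this (the module names and paths are only illustrative, and the exact gnulib-tool options depend on your project layout):

# Import the POSIX header-file modules you care about.
gnulib-tool --import --source-base=lib --m4-base=m4 stdio stdlib string unistd

# Rebuild with the check macro defined and collect the warnings;
# each warning points at a function with portability problems and
# names the gnulib module that addresses it.
./configure CFLAGS="-DGNULIB_POSIXCHECK -Wall"
make 2> posixcheck-warnings.log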
Also, some gnulib overrides exist for the sake of old platforms only. You can read the list of what each override fixes, and apply your own judgement.