I'm developing a simple web service which is going to add user-provided facts to my Prolog database (using assert). I thought it better to keep these dynamic facts ("data") separate from my service rules that operate on these facts ("code"), hence I split them into two different modules. The main reason was that I wanted to persist the dynamic facts to disk periodically, while being able to develop the code with no issues and independently of the user data.
I've been using assert(my_store:fact(...)) to add user data to the my_store module, and then in the code module I started writing rules like
:- module(my_code, [a_rule/1, ...]).

a_rule(Term) :-
    my_store:fact(...), ...
All seems OK, but with this approach my_store is hard-coded in the code module, which is a little worrying. For example, what if after some time I decide to change the data module's name, or perhaps I'll need two separate data modules, one with frequent persistence and another persisted only sporadically?
Can anyone advise what the best practices are in code and data organisation? Perhaps separation of code and data is against "the Prolog way"? Are there any good books that cover these issues in depth?
Cheers,
Jacek
That's a good question, touching on several very important topics.
I hope that the following comments help you to sort out most of your questions, possibly more so if you follow up on the points that interest you most with new questions that address each point in isolation.
First, when accepting user code as input, make sure you only allow safe code to be added to your programs. In SWI-Prolog, there is safe_goal/1, which helps you to ensure some safety properties. It's not perfect, but better than nothing.
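For instance, a minimal sketch of such a guard (add_user_fact/1 and the input convention are hypothetical; it assumes the user input already arrives as a parsed term):

:- use_module(library(sandbox)).

% safe_goal/1 raises an exception if the goal is not considered safe,
% so unsafe terms never reach the database.
add_user_fact(Term) :-
    safe_goal(Term),
    assertz(my_store:Term).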
Again in SWI-Prolog, there is library(persistency). Please carefully study the documentation to see how it works, and how you can use it to store dynamic data on disk.
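For illustration, a minimal sketch of a typical library(persistency) setup (the fact name and journal file are placeholders):

:- use_module(library(persistency)).

:- persistent
    fact(term:any).

% Attach the journal file; facts added via the generated wrappers
% are appended to it and restored on the next attach.
:- initialization(db_attach('my_store.journal', [])).

add_fact(Term) :-
    assert_fact(Term).   % assert_fact/1 is generated by the library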
Regarding the module names, I have two comments:
Explicit module qualifications of goals are very rare. Simply load the module and use its clauses.
Note that you can load modules dynamically. That is, nothing prevents you from using use_module/1 with a parameter that was obtained from somewhere else. This allows you more flexibility to specify the module from which you want to fetch your definitions, without having to change and recompile your code.
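A minimal sketch of that flexibility (data_module_file/1 is a hypothetical configuration fact):

% The file implementing the data module is looked up at run time,
% so switching stores requires no change to this code.
data_module_file('my_store.pl').

load_data :-
    data_module_file(File),
    use_module(File).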
Regarding separation of code and data: In Prolog, there is no such separation.
All Prolog clauses are Prolog terms.
Terms! TERMS ALL THE WAY DOWN!
Thanks to @mat for his suggestions, which made me read and think a little more. Now I can post a potential solution to my issue; it's not ideal and doesn't use the persistency library, but it is a simple first attempt.
As mentioned, user data are stored with assert(my_store:fact(...)). That means the module my_store is created dynamically and there's no file that would allow use_module to be used. There is, however, the import/1 predicate, which I can use to import dynamically asserted facts, so my solution looks like this:
:- module(my_code, [a_rule/1, ...]).

:- initialization(import_my_store).

import_my_store :-
    import(my_store:fact/1),
    import(my_store:another_fact/1),
    ...

a_rule(Term) :-
    fact(...), ...
Note that I can use fact/1 without explicitly specifying the my_store module. And I can also easily dump the user data to a file:
save_db(File) :-
    tell(File),
    my_store:listing,
    told.
The downside is that on initialization the import/1 calls generate warnings such as "import/1: my_store:fact/1 is not exported (still imported into my_code)". But that's not a big issue, because the facts are still imported into my_code and I can use them without explicit module qualification.
Looking forward to hearing any comments. Cheers,
A solution using Logtalk, which provides an alternative to modules. First define an object with your code:
:- object(my_code).

    :- public([a_rule/1, ...]).
    :- private([fact/1, another_fact/1, ...]).
    :- dynamic([fact/1, another_fact/1, ...]).

    a_rule(Term) :-
        ::fact(...), ...

    ...

:- end_object.
Then, dynamically create any number of data stores as necessary as extensions (derived prototypes) of the my_code object:
?- create_object(my_store, [extends(my_code)], [], []).
To query a data store, simply send it a message:
?- my_store::a_rule(Term).
The create_object/4 built-in predicate can load the persistency file for the store if necessary (so that you resume where you left off):

?- create_object(my_store, [extends(my_code)], [include('my_store.pl')], []).
User data can be saved in a data store by asserting it as expected:
?- my_store::assertz(fact(...)).
You will need a public predicate in the my_code object to dump a data store to a file. For example:
    :- public(dump/0).
    dump :-
        self(Self),
        atom_concat(Self, '.pl', File),
        tell(File),
        dump_dynamic_predicates,
        told.

    dump_dynamic_predicates :-
        current_predicate(Functor/Arity),
        functor(Template, Functor, Arity),
        predicate_property(Template, (dynamic)),
        ::Template,
        write_canonical(Template), write('.\n'),
        fail.
    dump_dynamic_predicates.
Now you can dump a data store by typing:
?- my_store::dump.
Note that with this solution it is trivial to have any number of data stores concurrently. If a data store requires a specialized version of the code, you can simply extend the code object and then create the specialized data store as an extension of that specialized object, as sketched below.
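A sketch of that specialization (my_special_code and my_special_store are illustrative names):

:- object(my_special_code, extends(my_code)).

    a_rule(Term) :-
        % specialized behavior here, possibly calling the
        % inherited definition:
        ^^a_rule(Term).

:- end_object.

?- create_object(my_special_store, [extends(my_special_code)], [], []).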
The Scenario
I've been using templates in Cro (documented at https://cro.services/docs/reference/cro-webapp-template), and have enjoyed that there are subs in them.
I currently have a 'main' template, and some reports, let's say report1, report2, and report3.
Let's say that, from report3, I want to include an array of report1.
Now, let's say that the reports each have the following subs:
init: Some JavaScript initialisation code (that should only be included once, no matter how many instances of the report are used)
HTML: Some HTML code that should be included for each instance of the report (with a few parameters to differentiate it, but that, due to restrictions of the JavaScript framework, may not contain any <script> or <style> tags)
data: A snippet of JavaScript that again has to be repeated for each time the report is included
Currently I have each of the above in a separate sub in the template.
The Problem
When the same report's subs are pulled in more than once, compilation fails with:
Redeclaration of symbol '&__TEMPLATE_SUB__report-initial'.
The Question
While I can pass a report name (e.g. "report1") to the main template, what I'm lacking is a way to have the main template call the subs on the report name that has been passed in, since there may be multiple reports involved.
Ideas I've tried
What would be ideal is if I could somehow create a "report" class that inherits from the template, and pass instances of the template class into the main report, and then call the subs as methods on the report class. However, I've been unable to figure out a way to do this.
I can see three likely options here:
My difficulty may be that I'm not thinking "The Cro Way". If there's a better way to achieve what I'm trying to do, please let me know.
There may be a way to achieve what I want, and I've just been unable to understand the documentation (or it may be missing).
While unlikely, it's possible that Cro hasn't been designed with this kind of possibility in mind.
Any help anyone can provide would be appreciated.
Thanks!
Edit: I think a macro that can have multiple (named) "bodies" would solve the problem.
It looks like &__TEMPLATE_SUB__report1-initial is a global symbol that is redeclared when you import report1 into report3. May I suggest trying template fragments instead of importing the whole template?
My initial response to your question is: "Please can you provide a minimal reproducible example of your code so that we can get a deeper view of the context and have something we can experiment with?"
My current understanding of what you need is "to use Raku-style classes & objects (with callbacks) in a Cro template setting", and that the standard ways of doing this, such as associative access to a nested topic variable, are too limited.
In itself, this is not necessarily a weakness of Raku / Cro, in that the power of a template slang needs to be limited to avoid potential security issues and, as with most template systems, is a bit more prosaic than a full-blown coding language.
My guess is that Cro template-parts, which can chunk up web parts and step in and out of the (real Raku) root block, can handle the report data structure that you describe, depending on how you chunk things up. Have you tried this?
If this is still not tenable, there are a couple of ways to expand the options, such as dependency injection and route handlers.
I'm building a single-page application in Elm and was having difficulty deciding how to split my code into files.
I ended up splitting it using one module per page and having Main.elm convert the Html and Cmd values emitted by each page using Cmd.map and Html.map.
My issue is that the documentation for both Cmd.map and Html.map says:
This is very rarely useful in well-structured Elm code, so definitely read the section on structure in the guide before reaching for this!
I checked the only two large apps I'm aware of:
elm-spa-example uses Cmd.map (https://github.com/rtfeldman/elm-spa-example/blob/cb32acd73c3d346d0064e7923049867d8ce67193/src/Main.elm#L279)
I was not able to figure out how https://github.com/elm/elm-lang.org deals with the issue.
Also, both answers to this Stack Overflow question suggest using Cmd.map without second thoughts.
Is Cmd.map the "right" way to split a single-page application into modules?
I think sometimes you just have to do what's right for you. I used the Cmd.map/Sub.map/Html.map approach for an application I wrote that had three "pages": Initializing, Editing, and Reporting.
I wanted to make each of these pages its own module, as they were relatively complicated, each had a fair number of messages that are only relevant to it, and it's easier to reason about each page independently in its own context.
The downside is that the compiler won't prevent you from receiving the wrong message for a given page, leading to a runtime error. For example, if the application receives an Editing.Save while it is on the Reporting page, what is the correct behavior? For my specific implementation, I just log it to the console and move on; this was good enough for me (and it never happened anyway). Other options I've considered include displaying a nasty error page to indicate that something horrible has happened (a BSOD, if you will), or simply resetting/reinitializing the entire application. A minimal sketch of this routing is below.
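Here is such a top-level update, assuming hypothetical Editing and Reporting page modules that each expose their own Model, Msg and update; the wildcard case silently drops mismatched messages (the logging variant would go there):

module Main exposing (Model, Msg(..), update)

import Editing
import Reporting

type Page
    = EditingPage Editing.Model
    | ReportingPage Reporting.Model

type alias Model =
    { page : Page }

type Msg
    = EditingMsg Editing.Msg
    | ReportingMsg Reporting.Msg

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case ( msg, model.page ) of
        ( EditingMsg subMsg, EditingPage subModel ) ->
            let
                ( newSubModel, subCmd ) =
                    Editing.update subMsg subModel
            in
            ( { model | page = EditingPage newSubModel }
            , Cmd.map EditingMsg subCmd
            )

        ( ReportingMsg subMsg, ReportingPage subModel ) ->
            let
                ( newSubModel, subCmd ) =
                    Reporting.update subMsg subModel
            in
            ( { model | page = ReportingPage newSubModel }
            , Cmd.map ReportingMsg subCmd
            )

        _ ->
            -- Message for a page we are no longer on: drop it.
            ( model, Cmd.none )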
An alternative is to use the Effect pattern, as described extensively in this Discourse post.
The core of this approach is that:
The extended Effect pattern used in this application consists in defining an Effect custom type that can represent all the effects that init and update functions want to produce.
And the main benefits:
All the effects are defined in a single Effect module, which acts as an internal API for the whole application that is guaranteed to list every possible effect.
Effects can be inspected and tested, unlike Cmd values. This allows testing all the application effects, including simulated HTTP requests.
Effects can represent a modification of top-level model data, like the Session when logging in, or the current page when a URL change is wanted by a subpage update function.
All the update functions keep a clean and concise Msg -> Model -> ( Model, Effect Msg ) signature.
Because Effect values carry the minimum information required, some parameters like the Browser.Navigation.key are needed only in the effects perform function, which frees the developer from passing them to functions all over the application.
A single NoOp or Ignored String can be used for the whole application.
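A minimal sketch of such an Effect module, with a couple of illustrative constructors and a made-up /api/data endpoint (the real module would list every effect of the application):

module Effect exposing (Effect(..), perform)

import Browser.Navigation as Nav
import Http

type Effect msg
    = None
    | Batch (List (Effect msg))
    | PushUrl String
    | GetData (Result Http.Error String -> msg)

-- Interpret an Effect as a real Cmd; the navigation key is needed
-- only here, never in the page modules.
perform : Nav.Key -> Effect msg -> Cmd msg
perform key effect =
    case effect of
        None ->
            Cmd.none

        Batch effects ->
            Cmd.batch (List.map (perform key) effects)

        PushUrl url ->
            Nav.pushUrl key url

        GetData toMsg ->
            Http.get
                { url = "/api/data"
                , expect = Http.expectString toMsg
                }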
I am attempting to use Jsonix to unmarshal the XML response from an SOS DescribeSensor request. In the broader scope, I am going to be using Jsonix to unmarshal all responses from SOS, particularly 2.0. I noticed that the response uses the SML (SensorML) namespace, so I added the extra module dependencies and sub-dependencies (namely GML_3_1_1, SWE_1_0_1, IC_2_0, SMIL_2_0, SMIL_2_0_Language, and of course SensorML_1_0_1).

Before I added these, I noticed the return was generic JSON (see first screenshot, particularly near sml:physicalsystem). After I added the dependencies, I got an error in my console during part of the unmarshalling process which I do not understand (see second screenshot). Here is a link to the XML response from the server for reference: https://drive.google.com/file/d/0B8LdnPVJpHz7M3VGb0FZc2lQcjQ/view?usp=sharing. I would really like to understand whether this has anything to do with the ordering of the modules when I create the context, though I believe it is fine. Once the solution to this is discovered, I have two follow-up questions.
Is it reasonable to expect (in general) that using the modules built from the ogc-schemas on the highsource GitHub page will allow me to handle all responses via Jsonix, i.e. that every element will always be mapped to a defined type? I know these schemas/mappings are very complicated.
Are there any other tools I can use to verify the modules or validate them against the schemas, to make life easier rather than tracking down elements on an individual basis or tracing through various module files when Jsonix seems to parse incorrectly?
Thanks in advance - Richard3d
var context = new Jsonix.Context([XLink_1_0, GML_3_2_1, IC_2_0, SMIL_2_0, SMIL_2_0_Language, GML_3_1_1, SWE_1_0_1, SensorML_1_0_1, OWS_1_1_0, SWE_2_0, SWES_2_0, WSN_T_1, WS_Addr_1_0_Core, OM_2_0, ISO19139_GMD_20070417, ISO19139_GCO_20070417, ISO19139_GSS_20070417, ISO19139_GTS_20070417, ISO19139_GSR_20070417, Filter_2_0, SOS_2_0]);
Disclaimer: I am the author of Jsonix and the main developer of ogc-schemas.
First of all, you're on the right track, stay on it.
Yes, if you have all the required mappings then you should get a "nice" JSON with all the properties with specific types, cardinalities etc.
The goal of Jsonix is to provide bi-directional XML<->JSON conversion with deterministic structure, types and cardinalities.
The goal of OGC Schemas is to provide JAXB and Jsonix mappings for all of the OGC Schemas.
So together these two should allow you to transform any OGC XMLs from/to JSON.
"Generic JSON" was actually just DOM. If a property allows DOM and Jsonix does not have mapping for certain element, it is just taken as DOM. You were just missing SensorML mappings.
You're right, the structure of the schema dependencies is very complex. But this is something we should take to the OGC. :) It's a bit crazy that you need a dozen schemas just to read sensor data. I was actually intending to build automatic loading of dependencies but have not yet implemented this feature.
The next problem, with GML_3_1_1.AbstractFeatureType, is probably this issue. Try changing the order of the mappings (move GML_3_1_1 to an earlier position). Actually, the order of mappings should not be significant, but, well, there's a bug. For example:
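Here is the context from the question with GML_3_1_1 moved ahead of the modules that depend on it (the list is otherwise unchanged):

var context = new Jsonix.Context([
    XLink_1_0, GML_3_1_1, GML_3_2_1, IC_2_0, SMIL_2_0, SMIL_2_0_Language,
    SWE_1_0_1, SensorML_1_0_1, OWS_1_1_0, SWE_2_0, SWES_2_0, WSN_T_1,
    WS_Addr_1_0_Core, OM_2_0, ISO19139_GMD_20070417, ISO19139_GCO_20070417,
    ISO19139_GSS_20070417, ISO19139_GTS_20070417, ISO19139_GSR_20070417,
    Filter_2_0, SOS_2_0
]);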
Tools to cross-check: no, probably not. My approach is to do roundtrip tests (unmarshal, marshal, unmarshal again, check equality). From experience, there are normally a couple of caveats at the start, but then it works by design. Of course there are bugs in Jsonix and there may be problems with the mappings, but these get sorted out. A sketch of such a roundtrip check is below.
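Something like the following, assuming Node.js, the context built above, and an illustrative file name (the equality check here is a crude JSON comparison):

var unmarshaller = context.createUnmarshaller();
var marshaller = context.createMarshaller();

unmarshaller.unmarshalFile('DescribeSensorResponse.xml', function (first) {
    // Marshal back to XML, unmarshal again, and compare the results.
    var xml = marshaller.marshalString(first);
    var second = unmarshaller.unmarshalString(xml);
    console.log(JSON.stringify(first) === JSON.stringify(second));
});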
Also feel free to create a support project here:
https://github.com/highsource/jsonix-support
For instance, https://github.com/highsource/jsonix-support/s/sos.
Here's an example of such a support project:
https://github.com/highsource/jsonix-support/tree/master/l/lightstalker89
I'm asking for this because just downloading XML from Google Drive (a) takes effort on my part to set up the support project and (b) is legally risky, as I have no idea where this XML comes from and whether I have the rights/license to add these files to my test suites.
This is my first time asking a question here, so be nice ;p ... I'm working with Magento (and the Zend Framework) for the first time, and I'm trying to build a custom grid that will populate based on a manually written query. I'm trying to extend Mage_Core_Model_Mysql4_Collection_Abstract to allow a query to be loaded into it, and then to parse the select fields in the extended Grid class to make it dynamic... Is this even possible, or am I beating a dead horse? I've been at it for a week now and I'm not getting anywhere. The problem seems to be that the Model_Mysql4_Collection class has to be initialized with a resource model using _init() in the constructor.
As a learning exercise, use the module creator to make an admin grid page and see how that is done. Or even modify its output to get what you need.
There will be a grid container block, a grid block (with _prepareCollection and _prepareColumns methods), a model, a resource model (representing a single record) and a collection resource model (representing several records).
Providing your own _init() methods shouldn't be any sort of problem. Perhaps you'd like to post your config.xml and code so far as a pastebin or something. For example:
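A minimal sketch of those pieces, using a hypothetical My_Module extension with a "thing" entity (all class, alias and column names are illustrative):

<?php
// Resource model representing a single record.
class My_Module_Model_Mysql4_Thing extends Mage_Core_Model_Mysql4_Abstract
{
    protected function _construct()
    {
        // table alias, primary key column
        $this->_init('my_module/thing', 'thing_id');
    }
}

// Collection resource model representing several records; _init()
// binds it to the model above.
class My_Module_Model_Mysql4_Thing_Collection
    extends Mage_Core_Model_Mysql4_Collection_Abstract
{
    protected function _construct()
    {
        $this->_init('my_module/thing');
    }
}

// In the grid block, hand the collection to the grid.
class My_Module_Block_Adminhtml_Thing_Grid extends Mage_Adminhtml_Block_Widget_Grid
{
    protected function _prepareCollection()
    {
        $this->setCollection(Mage::getResourceModel('my_module/thing_collection'));
        return parent::_prepareCollection();
    }
}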
I found the Wikipedia entry on the soft coding anti-pattern terse and confusing. So what is soft coding? In what settings is it a bad practice (anti-pattern)? Also, when could it be considered beneficial, and if so, how should it be implemented?
Short answer: going to extremes to avoid hard coding and ending up with some monstrous, convoluted abstraction layer to maintain that is worse than if the hard-coded values had been there from the start; i.e. over-engineering.
Like:
SpecialFileClass file = new SpecialFileClass( 200 ); // hard coded
SpecialFileClass file = new SpecialFileClass( DBConfig.Start().GetConnection().LookupValue("MaxBufferSizeOfSpecialFile").GetValue()); // soft coded
The main point of the Daily WTF article on soft coding is that, out of premature optimization and fear, a system that is very well defined and contains no duplicated knowledge gets altered and becomes more complex without any need.
The main thing that you should keep in mind is whether your changes actually improve your system; avoid lightly labelling something as an anti-pattern and avoiding it at all costs. Making your system configurable and avoiding hard coding is a simple cure for duplicated knowledge in your system (see point 11, "DRY: Don't Repeat Yourself", in The Pragmatic Programmer Quick Reference Guide). This is the driving need behind the suggestion to avoid hard coding: ideally, there should be only one place in your system (code or configuration) that has to be altered if you need to change something as simple as an error message.
Ola, a good example of a real project that has the concept of soft coding built into it is the Django project. Its settings.py file abstracts certain data settings so that you can make the changes there instead of embedding them within your code. You can also add values to that file if necessary and use them where needed.
http://docs.djangoproject.com/en/dev/topics/settings/
Example:
This could be a snippet from the settings.py file:
NUM_ROWS = 20  # settings must be UPPER_CASE to be visible via django.conf.settings
Then within one of your files you could access that value:
from django.conf import settings
...
for x in range(settings.NUM_ROWS):
    ...
Soft coding: the process of inserting values into a computer program from an external source, e.g. entering values through the keyboard or a command-line interface, or reading them from a configuration file. Soft coding is considered good programming practice because developers can easily modify program behaviour without touching the source code.
Hard coding: assigning values in the source code itself while writing the program, which are then compiled into the executable. It then becomes very difficult to change or modify those values. For example, in blockchain technology the genesis block is hard-coded and cannot be changed or modified.
The ultimate in softcoding:
const float pi = 3.1415; // Don't want to hardcode this everywhere in case we ever need to ship to Indiana.