When I run the following query:
match (n) return distinct labels(n);
I am seeing the following error:
DynamicRecord[396379,used=false,(0),type=-1,data=byte[],start=true,next=-1] not in use
Other people have asked how to deal with this situation. I am asking a different set of questions: what is a DynamicRecord in Neo4j? And, what can be done to avoid this type of error?
What is a DynamicRecord?
The source for DynamicRecord is here. This is largely useless.
Anyhow, all I can gather is that:
It is a very low-level construct in the store kernel.
A multitude of tests use it in relation to consistency checking.
It appears to be a record that holds dynamically sized (variable-length) data, and it can represent different types of data (property blocks, schema, etc.)
This is also largely useless. I know.
What can be done to avoid this type of error?
This seems to be a very generic error, but most online resources (GitHub issues / SO questions) seem to relate to DB upgrades. Some point to changes in constants used by DynamicRecord that can yield data corruption after upgrades.
Based on that, I guess that the following steps could prevent such error:
Backup your data.
Migrate your data properly when upgrading.
Do not use different versions of Neo4j against the same data.
You've guessed it - this is also rather useless, but I hope it is better than nothing.
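If it helps to make those steps concrete, one part that is easy to script is running Neo4j's offline consistency checker before and after an upgrade. Below is a minimal F# sketch that shells out to neo4j-admin; it assumes a Neo4j 3.x/4.x installation, and the path, database name, and exact flags vary by version.

open System.Diagnostics

// Run "neo4j-admin check-consistency" and capture its console output.
// On 3.x the database is typically graph.db; on 4.x it is typically neo4j.
let runConsistencyCheck (neo4jAdminPath: string) (databaseName: string) =
    let startInfo =
        ProcessStartInfo(neo4jAdminPath,
                         sprintf "check-consistency --database=%s" databaseName,
                         RedirectStandardOutput = true,
                         UseShellExecute = false)
    use proc = Process.Start(startInfo)
    let output = proc.StandardOutput.ReadToEnd()
    proc.WaitForExit()
    proc.ExitCode, output

If the checker reports inconsistencies, that is the moment to fall back to the backup from the first step instead of carrying on with the upgrade.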
I do not know how to implement a custom handling procedure for user-defined errors (which stop the routine/algorithm) or warning messages (where the routine/algorithm can proceed) without using exceptions (i.e. failwith ... a standard System.Exception).
Example: I have a module with a series of functions that takes a lot of input data, which must be checked and then used to calculate the thickness of a pressure vessel component.
The calculation procedure is complex and iterative, and there are a lot of checks to perform before getting a result. These checks can generate "user-defined errors" that stop the procedure/routine/algorithm, or "warning messages" that let it proceed.
I need to collect these errors and messages so they can be shown to the user in a dedicated form (WPF or Windows Forms) later, at the end.
Note: every time I read a book on F#, C#, or Visual Basic, or an article on the internet, I find the same philosophy/warning: raising system or user-defined exceptions should be limited as much as possible; exceptions are for unmanageable, exceptional (unpredictable) events, and they put extra load on the system.
I do not know which handling philosophy to implement. I'm confused, and there are limited sources available on the internet on this particular topic.
At the moment I'm planning to adopt the philosophy described in "https://fsharpforfunandprofit.com/posts/recipe-part2/". It sounds good to me: complex, but good. I was not able to find other references on this topic.
Question: are there other philosophies I can consider for creating this custom handling/collecting of user-defined errors? Any books or articles to read?
My decision will have a big impact on how I design and write my code (splitting the problem into several functions, building an "engine" that runs functions in sequence or composes them in different ways depending on results, deciding where to check for errors/warnings, and how to store error and warning messages so I can understand what is going on and which function generated each error/warning).
Many thanks in advance.
The F# way is to encode the errors in the types as much as possible. The easiest example is an option type, where you would return None if the operation failed and Some value when it succeeded. Surprisingly, very often this is enough! If not, then you can encode the different types of errors AND a success "state" in a discriminated union, e.g.
[<Measure>]
type psi

type VesselPressureResult =
    | PressureOk
    | WarningApproachingLimit
    | ErrorOverLimitBy of int<psi>
and then you will use pattern matching to "decide" what to do in each case. If you need to add more variants, e.g. ErrorTooLow, then you would add that to the DU and then the compiler will "tell" you about all places where you need to fix the logic.
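For instance, a minimal sketch of such a match (the classify and describe functions and the limit values below are made up purely for illustration):

// Hypothetical limit, used only for this sketch
let maxAllowed = 3000<psi>

// Classify a measured pressure against the limit
let classify (pressure: int<psi>) =
    if pressure > maxAllowed then ErrorOverLimitBy (pressure - maxAllowed)
    elif pressure > maxAllowed - 200<psi> then WarningApproachingLimit
    else PressureOk

// Decide what to do in each case; adding a new variant to the DU
// makes the compiler flag every match that does not handle it
let describe result =
    match result with
    | PressureOk -> "Pressure is within limits."
    | WarningApproachingLimit -> "Warning: pressure is approaching the limit."
    | ErrorOverLimitBy excess -> sprintf "Error: over the limit by %d psi." (int excess)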
Here is the perfect source with detailed information: https://fsharpforfunandprofit.com/series/designing-with-types.html
The problem
Ok, sorry that my question is somewhat abstract and subjective, but I will try to make it as specific as possible. The situation I am in is simple: I am remaking a very old MS Access application as a new website using ASP.NET MVC. Since the MVC site is using SQL Server 2008 (for many well-known reasons), I need to find a way to migrate the tables AND the data, because the information in the old database will be used in the new application.
Alright, so far so good; however, there are a few problems. The old application is written in a different language, meaning that I want to translate table names, field names, and all other names into English. Furthermore, I will be making some changes to the models themselves (change the type of some fields, add additional fields to some tables, remove old unnecessary ones, and more). So technically I'll be 'having my way' with everything.
Researched solutions
With those things in mind I researched ways to migrate data from an Access database to SQL Server. Of course, there is a lot of information on the matter; on Stack Overflow alone there are more than a few questions and solutions. So why am I struggling to find the answer? Well, I found a few solutions that will be sufficient to some extent (actually, they will definitely solve my problems), but I am writing to ask if someone experienced has a better perspective on it than I do. Alright, the solutions and why I am still looking for advice (I'll list just a couple of the most common and popular ones that I found; many of the others share the same capabilities and/or results):
Upsize Wizard (Access) - this is a tool devised specifically for migrating tables and data from Access. It is my favourite for the moment, as I find it fairly straightforward to work with and it provides good overall results. I was able to migrate the tables to SQL Server (along with the data, of course), which more or less is what I am intending to do. It is fast, and it allows you to migrate indexes, primary keys, and, to my knowledge, even foreign keys (table relationships). The downsides of this tool, however, are that it ignores your queries (which I don't really need, honestly) and that it doesn't provide a way to change the model, names, or types of the properties of the tables you migrate - which is the thing I would prefer, because I will have to make more than a few changes (adding, renaming, deleting, etc.) and then continue with the development process of the application, which will lead to a few additional minor changes. And finally, I would need to apply all changes (migration + all changes) on the production server, which overall is prone to mistakes, as I will be doing it by hand (and there are more than a few tables).
SQL Server Migration Assistant (SSMA) - ok, this is a separate tool (not included in Access) with the same idea: to migrate data from Access to... possibly anywhere; I haven't researched that. Overall it offers more functionality and customization than the Upsize Wizard, but of course it does so in a more complicated way. I haven't put enough effort into making a migration with this tool yet, as it involves a lot of installation and additional work, but according to my research it provides almost all (if not all) of the functionality I require. The downside, however, comes with the naming. As I mentioned, it allows you to apply changes to the tables, schema, fields, indexes, keys, and probably everything, but the articles advise that I change the names in Access first, as it will be easier and the migration process will run more smoothly. I am not allowed to make changes to the original Access database, as it will remain functional until the publish of the 'renewed' project and the data inside it is being used, so a mere copy of the file is a solution I am not particularly fond of, because I might lose new records. Also, I can't predict the changes I will want to make during the development process (as I said, I believe I will need to apply some additional changes later on when I find 'weaknesses' in my data design), so I find it to be a little half-baked as a solution.
Conclusion
The options presented, the way I see them, are two:
Use the Upsize Wizard to migrate the Access tables, then write a script that applies the changes I want to make. Then, in the development process, add any additional changes to the script. When ready to publish on the production server, reapply the migration with the wizard, run the changes script, and pray everything is fine.
Get more involved with the SSMA tool and try producing an updated version of the tables with the migration process. (See how efficient the renaming is and decide whether to use a copied file for the renaming and then find a way to migrate only new records, or do it all in SSMA.) Then again write a script for the changes that occur during the development process, re-do it all and apply it on the production server when ready, and then pray everything is fine.
Use an option I have not yet seen, apply it, and then pray everything is fine.
I have researched the matter for a couple of days now and found a few more solutions that I do not believe are better than the ones mentioned. However, I allow for the possibility of having missed the 'big red X on the map': a practical and easy solution which seems like it was designed specifically for me (though I doubt that a little). Anyway, reducing all the madness I have written so far to a few simple questions:
Can anyone tell whether my conclusions are correct? I am leaning towards option one, as it is easier to accomplish.
Has anyone experienced/found a better way to do this, or spotted some 'logic leaps' in my writing? I am overthinking the entire thing a little and may be making some obvious miscalculation.
I am very sorry for asking a somewhat trivial question, and one that involves decision making that may require deeper understanding of my project and situation, yet I am working with rather sensitive data and would appreciate feedback, even if only to improve my confidence in the chosen approach.
There is one other tool/method you might want to consider that seems to cater to your specific needs more: use the data import/export tool that ships with SQL Server to do a complete copy of all data into a temporary location within SQL Server, and then write custom queries to reorganize the names and make the other changes you want. It is a bit more work, but you could use the end product as a seed method for your migrations ;) (if you are doing code first anyway).
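For the renaming part, here is a rough sketch of what those custom queries could look like when driven from .NET. The connection string, table names, and column names are placeholders; sp_rename is the standard SQL Server procedure for renaming tables and columns.

open System.Data.SqlClient

// Placeholder connection string; point it at the staging copy of the data
let connectionString = "Server=.;Database=StagingDb;Integrated Security=true"

// Rename statements to apply after the raw import (placeholder names)
let renames =
    [ "EXEC sp_rename 'tblKlant', 'Customer'"              // rename a table
      "EXEC sp_rename 'Customer.Naam', 'Name', 'COLUMN'" ] // rename a column

let applyRenames () =
    use conn = new SqlConnection(connectionString)
    conn.Open()
    for sql in renames do
        use cmd = new SqlCommand(sql, conn)
        cmd.ExecuteNonQuery() |> ignore

The same script can then be re-run against the production copy when you publish, which keeps the manual step down to a single, repeatable command.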
I am upgrading a web app that will be using two different database types: the existing database is MySQL, tightly integrated with the current systems, and a MongoDB database will be added for the extended functionality. The new functionality will also rely pretty heavily on the MySQL database for environment data such as information on the current user, content, etc.
Although I know I can just assemble the queries independently, it got me thinking of a way that might make the construction of queries much simpler (only for easier legibility while building; once it's finished, I can convert back to hard-coded queries). It would entail an encapsulation object that would contain:
what data is being selected (including functionally derived data)
source (including joined data; I know that joins are not a good idea for non-relational DBs, but it would be nice to have the facility just in case, and it can be rewritten into two queries later for performance)
where and having conditions (stored as their own object types so they can be processed later, potentially including other select queries that can be interpreted by whatever DB is using it)
orders
groupings
limits
This data can then be passed to an interface adapter that builds and executes the query, returning the result as an array, an object, or whatever is desired.
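To make that more tangible, here is a rough sketch of the encapsulation object as plain data; all type and member names are made up for illustration:

// Conditions are kept as data so each backend can interpret them later
type Condition =
    | Equals of field: string * value: obj
    | GreaterThan of field: string * value: obj
    | And of Condition * Condition
    | Or of Condition * Condition

type SortDirection = Ascending | Descending

// The encapsulation object: what to select, from where, and how to filter and shape it
type QuerySpec =
    { Select  : string list                  // fields or derived expressions
      Source  : string                       // table / collection name
      Joins   : (string * Condition) list    // (other source, join condition)
      Where   : Condition option
      Having  : Condition option
      OrderBy : (string * SortDirection) list
      GroupBy : string list
      Limit   : int option }

// Each database gets its own adapter that turns the spec into a native query
type IQueryAdapter =
    abstract member Execute : QuerySpec -> seq<Map<string, obj>>

A MySQL adapter would render the spec to SQL, while a MongoDB adapter would translate it into find/aggregate calls, splitting any joins into two queries as described above.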
Although this sounds good, I have no idea if any code like this exists. If so, can anybody point it out to me? If not, are there any resources on similar projects that might allow me to continue the work and build a basic version?
I know this would be a complicated library, but I have been working on this update for the last few days, and constantly switching back and forth has been getting me muddled at times and allowing mistakes to occur.
I would study things like the SQL grammar: http://www.h2database.com/html/grammar.html
It gives you an idea of how queries should be constructed.
You can study existing libraries around LINQ (C#): https://code.google.com/p/linqbridge/
Maybe even check out this link about FQL (Facebook's query language): https://code.google.com/p/mockfacebook/issues/list?q=label:fql
Like you already know, this is a hard problem, and it will be a big challenge to make it run efficiently. Maybe consider moving all data from MySQL and Mongo into a third data store that holds a copy of everything, and then running queries against that? For example, replicating all writes to something like Redis or Elasticsearch and then writing your queries there.
Either way, good luck!
I know! The "proper" way to read STXL.CLUSTD is through a SAP ABAP function. But I'm sorry, we are suffering badly from performance problems. We have already made our decision to go directly to the database (Oracle), and we don't have any plans to revert that decision yet, since everything has gone so much better so far.
However, we've come across this issue: the text in the STXL.CLUSTD field is stored in an incomprehensible format. We cannot find any information about its encoding via Google. Can anybody give me a hint on how to decode text from STXL.CLUSTD?
Thanks
Short version: You don't. Use the function module READ_TEXT.
Long version: You're looking at a so-called cluster table. See http://help.sap.com/saphelp_47x200/helpdata/en/fc/eb3bf8358411d1829f0000e829fbfe/frameset.htm for the details. The data you see is an internal representation of the text, somehow related to the way the ABAP kernel handles the data internally. This data does not make any sense without the metadata. If you change the original structure in an incompatible way, the data can no longer be read. Oh, and did I mention that the data does not contain a reference to the metadata? When reading the contents of these tables, even in ABAP, you need to know the original source data structure, otherwise you're doomed. Without the metadata and the knowledge of how the kernel handles these data types at runtime, you'll have a hard time deciphering the contents.
Personal opinion: Direct access to the database below the SAP R/3 system is a really bad idea since this not only bypasses all safety measures, but it also makes you very vulnerable to all structural changes of the database. The only real reason for accessing the database directly is not lack of performance, but lack of (ABAP) knowledge, and that should be curable :-)
You can definitely read clusters and pools without running any ABAP code, invoking RFCs or BAPIs, etc. It is a very good approach, highly performant, and easy to use.
I don't like people flogging their products on Stack Overflow, but the information that you must use ABAP to access SAP data has been outdated for over 7 years now.
Thanks,
Bill MacLean
I just noticed this thread, and I work for Simplement. Snow_FFFF is correct (BTW, that user is not me and, AFAIK, is not anyone in our company). The Data Liberator product has been de-clustering and de-pooling tables (and doing many other things) for our customers since 2009.
This is going to be both a direct question and a discussion point. I'll ask the direct question first:
Can a stored procedure create another stored procedure dynamically? (Personally, I'm interested in SQL Server 2008, but in the interest of wider discussion I will leave it open.)
Now to the reason I'm asking. Briefly (you can read more elsewhere), user-defined scalar functions in SQL Server are performance bottlenecks at best. I've seen uses in our code base that slow the total query down by 3-4x, but from what I've read the local impact of the scalar UDF can be 10x+.
However, UDFs are, potentially, great for raising abstraction levels, reducing lots of tedious boilerplate, centralising logic rules, etc. In most cases they boil down to simple expressions that could easily be expanded inline - but they're not (I'm really only thinking of non-querying functions, e.g. string manipulations). I've seen a bug report for this to be addressed in a future release - with some buy-in from MS. But for now we have to live with the (IMHO) broken implementation.
One workaround is to use a table-valued UDF instead; however, these complicate the client code in ways you don't always want to deal with (especially when the UDF just computes the result of an expression).
So my crazy idea, at first, was to write the procs with C preprocessor directives, then pass them through a preprocessor before submitting them to the RDBMS. This could work, but has its own problems.
That led me to my next crazy idea, which was to define the "macros" in the DB itself and have a master proc that accepts a string containing an unprocessed SP with macros, expands the macros inline, and then submits it on to the RDBMS. This is not what SPs are good at, but I think it could work - assuming you can do this in the first place - hence my original question.
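To make the idea concrete, here is a minimal sketch of the kind of macro expansion I have in mind, as plain string processing (the macro names and the SP text are made up); the same substitution could just as well live inside the master proc and be submitted via dynamic SQL (EXEC / sp_executesql), assuming that works in the first place:

// Hypothetical macro table: token -> expression it expands to inline
let macros =
    [ "$NORMALIZE_NAME", "UPPER(LTRIM(RTRIM(Name)))"
      "$IS_ACTIVE",      "(Status = 1 AND DeletedAt IS NULL)" ]

// Expand every macro token in the unprocessed stored-procedure text
let expandMacros (spText: string) =
    macros
    |> List.fold (fun (acc: string) (token, expansion) -> acc.Replace(token, expansion)) spText

// The expanded text would then be submitted to the RDBMS,
// e.g. via EXEC sp_executesql on the resulting CREATE PROCEDURE string.
let expanded =
    expandMacros "CREATE PROCEDURE dbo.GetActiveCustomers AS SELECT $NORMALIZE_NAME FROM Customers WHERE $IS_ACTIVE"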
However, now I have explained my path to the question, I'd also like to leave it open for other ideas. I'm sure I'm not the only one who has been thinking along these lines. Perhaps there are third-party solutions already out there? My googling has not turned up much yet.
Also I thought it would be a fun discussion topic.
[edit]
This blog post I found in my research describes the same issue I'm seeing. I'd be happy if anyone could point out something that I, or the blog poster, might be doing wrong that leads to the overhead.
I should also add that I am using WITH SCHEMABINDING on my S-UDF, although it doesn't seem to be giving me any advantage
Your string-processing UDF won't be a perf problem. Scalar UDFs are a problem only when they perform selects, and those selects are done for every row; this in turn spikes the IO.
String manipulation, on the other hand, is done in memory and is fast.
As for your idea, I can't really see any benefit to it. Creating and dropping objects like that can be an expensive operation and may lead to schema locking.