When using Emacs to create a SQL query in SQL-mode, the SQLi buffer is the typical, ugly console window of the command-line tool for the target database. Most of the output is the same: ASCII characters trying to build a visual representation of a table.
Has anyone created a minor mode to make the output more readable? Here are some features that could be useful:
Create a Header. I'm not sure how this could be done, but it would be very cool if there was a way to visually show the user which columns go with the data. If I'm looking at line 300 of an output, it's a bit challenging to count the columns and read the SQL that was used. Maybe it would be something like a hacked up mode-line that doesn't change when scrolling vertically, but does when scrolling horizontally.
Only Show the Last Result. When I run a new query, that's all I want to see in the SQLi window. It'd be nice to have a feature to jump through the history of results from previous SQL queries.
Faces. Create different colors for grid lines and data. Maybe even different colors for different data types (maybe this is too hard).
I think these features would make Emacs more palatable for database developers. There are a ton of packages that do these same tasks (and others) well, but they aren't nearly as powerful as Emacs. Also, it's quite annoying to switch editors just to write my SQL queries.
EDIT: Something like hexl-mode would be very cool.
All the SQL mode related wisdom of the Emacs community is probably gathered here. Sadly there is nothing like the mode you wish for - I've been wishing for something like that for quite some time, and that's the reason I use sql-mode only for simple queries - the results for anything big are totally unreadable. Hopefully the situation will change in the future...
This issue convinced me to switch to jEdit, at least for database projects. It's a bit difficult to configure, but the SQL integration does exactly what I needed. Also, instead of LISP it uses Java. I highly recommend it.
Just a bit of background on where my question is coming from: my company has multiple databases across the globe that use the same schema, and one of my department's responsibilities is to monitor and make sure all these DBs are in sync from a schema SQL change perspective.
Now, my question is whether anyone knows of any software/tool with a frontend UI which is able to do the following (the lower the number, the more important it is to have):
Able to track what SQL code change was applied on which database and when. Basically, if we write a SQL query that changed the structure of a table and we need it applied to 80% or 100% of the DBs, then either via manual input or some automatic check the tool will tell me that yes, this was indeed applied.
Code distribution tool: we give it the query or a file that contains the code and it's able to push to the Databases it needs to (and create the audit log for that)
Code/object repository: keeps track of what was custom developed and pushed to the databases
I know SSIS might be able to do some of these things, but we need a tool that also has a simple frontend interface that can be accessed by non-IT personnel. (*clarification: we are not planning on giving non-DBA people access to change things, just to the audit aspect of said tool)
I've tried searching the internet, but I have a feeling I'm not using the right vocabulary to get the results I'm looking for.
Hence I wanted to see if the community is aware of any such tool or something similar.
Try searching for one of these two types of systems:
Release/Build/Deployment Automation: Complex programs like Serena that have modules for pushing, tracking, and auditing any kind of software, anywhere. These will include all the GUI bells and whistles. But you'll have to deal with extra databases, configuration, agents, workflows, consultants(?), etc. These programs are geared more towards developers.
Remote Execution/Configuration Management: Simpler programs like Salt, Fabric, and Ansible that let you run operating system commands anywhere. They don't offer as many features, and you have to do more of the work yourself, but in some ways that's liberating. If you know exactly what commands you want to run you don't need some other program holding your hand. These programs are geared more towards administrators.
From a database administrator's point of view, the main problem with those types of programs is that none of them are relational. Yes they can connect to a database and run a script, but none of them really speak SQL. Their native languages are Java, XML, SSH, etc. There's nothing wrong with those technologies, but if you only care about databases you don't want to deal with all that complexity.
If you're not happy with either of those types of programs I recommend you look at my open source program Method5. It is a remote execution program built as an extension to Oracle SQL. It works entirely inside an Oracle database, so you can install it yourself and won't need any additional websites, agents, configuration files, GUIs, etc.
Based on your comment about getting bogged down by links, and my answer to your question about half a year ago, I think this is the kind of program you were gradually heading towards creating. It took my team a couple thousand hours of developing and testing to get it right so you were probably wise to give up on making your own.
To specifically answer your requirements:
Tracking: Changes are stored in an audit trail. But more importantly it has the ability, and a pre-built script, to compare an unlimited number of schemas, all in one view. At the end of the day what you really want to know is "are my schemas the same?", not necessarily "did the same thing get run everywhere?".
Code Distribution: If you just have SQL or PL/SQL, deploying it through Method5 is as easy as it can possibly get. Just specify what you want to run, and where you want to run it, like this: select * from table(m5('create index ...', 'dev, qa, prodDB1, prodDB2')); The program does not (yet) run SQL*Plus scripts. But when you have the ability to run SQL and PL/SQL so easily there's little need for SQL*Plus.
Code Repository: All executions are stored in a simple table, M5_AUDIT. It contains the code, who ran it, where they ran it, and how they ran it. It wasn't designed to be a repository like SVN but it's good enough for simple auditing and tracking code.
Method5 does not contain a GUI but in some ways I consider that to be a feature. Since everything is done relationally, everything is in a simple table. You can use any of your existing GUIs - Toad, PL/SQL Developer, Excel, Apex, etc. It's a robust back-end solution that will hopefully make a good foundation for easily building a simple front end.
The problem
OK, sorry that my question is somewhat abstract and subjective, but I will try to make it as specific as possible. The situation I am in is simple - I am remaking a very old MS Access application as a new website using ASP.NET MVC. Since the MVC site currently uses SQL Server 2008 (for many well-known reasons), I need to find a way to migrate the tables AND the data, because the information in the old database will be used in the new application.
Alright, so far so good, however there are a few problems. The old application is written in a different language, meaning that I want to translate table names, field names, and all other names that are there to English. Furthermore, I will be making some changes on the models themselves (change the type of some fields, add additional fields to some tables, remove old unnecessary ones and more). So technically I'll be 'having my way' with everything.
Researched solutions
With those things in mind I researched the ways to migrate data from an Access database to SQL Server. Of course, there is a lot of information on the matter; on Stack Overflow alone there are more than a few questions and solutions. So why am I struggling to find the answer? Well, I found a few solutions that will be sufficient to some extent (they will actually solve my problems), but I am writing to ask if someone experienced has a better perspective on it than I do. Alright, the solutions and why I am still looking for advice (I'll list just a couple of the most common and popular ones I found; many of the others share the same capabilities and/or results):
Upsize Wizard (Access) - this is a tool devised specifically for migrating tables and data from Access. It is my favourite for the moment, as I find it fairly straightforward to work with and it provides good overall results. I was able to migrate the tables to SQL Server (along with the data, of course), which more or less is what I am intending to do. It is fast, and it allows you to migrate indexes, primary keys and, to my knowledge, even foreign keys (table relationships). The downsides of this tool, however, are that it ignores your queries (which I don't really need, honestly) and that it doesn't provide a way to change the model, names or types of the properties of the tables you migrate - which is something I would very much like, because I will have to make more than a few changes (adding, renaming, deleting, etc.) and then continue with the development process of the application, which will lead to a few additional minor changes. And finally I would need to apply everything (the migration plus all the changes) on the production server, which overall is prone to mistakes as I will be doing it by hand (and there are more than a few tables).
SQL Server Migration Assistant (SSMA) - OK, this is a separate tool (not included in Access) with the same idea - to migrate data from Access to ... possibly everywhere; I haven't researched that. Overall it offers more functionality and customization than the Upsize Wizard, but of course it does so in a more complicated way. I haven't put in enough effort to do a migration with this tool yet, as it involves a lot of installation and additional work, but according to my research it provides almost all (if not all) of the functionality I require. The downside, however, comes with the naming. As I mentioned, it allows you to apply changes to the tables, schema, fields, indexes, keys and probably everything, but the articles advise that I change the names in Access first, as it will be easier and the migration process will run more smoothly. I am not allowed to make changes to the original Access database, as it will remain functional until the publication of the 'renewed' project and the data inside it is being used, so a mere copy of the file is a solution I am not particularly fond of, because I might lose new records. Also I can't predict the changes I will want to make during development (as I said, I believe I will want/need to apply some additional changes later on when I find 'weaknesses' in my data design), so I find it to be a little half-baked solution.
Conclusion
The options presented, the way I see them, are two:
Use the Upsize Wizard to migrate the Access tables, then write a script that applies the changes I want to make (a sketch of such a script follows this list). Then, during development, add any additional changes to that script. When ready to publish on the production server, re-run the migration with the wizard, run the changes script, and pray everything is fine.
Get more involved with the SSMA tool and try producing an updated version of the tables through the migration process. (See how effective the renaming is and decide whether to rename in a copied file and then find a way to migrate only the new records, or do it all in SSMA.) Then again write a script for the changes that occur during development, and re-do and apply it all on the production server when ready, and then pray everything is fine.
Find an option I have not yet seen, apply it, and then pray everything is fine.
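To make option one concrete, here is a rough sketch of what the hand-written 'changes script' could look like. Every table, column and constraint name below is made up purely for illustration, and it assumes plain SQL Server 2008 T-SQL:

-- Illustrative post-migration changes script; all object names are hypothetical.
-- Re-run after each Upsize Wizard import into a fresh target database.
EXEC sp_rename 'dbo.OldCustomerTable', 'Customers';                  -- translate a table name
EXEC sp_rename 'dbo.Customers.OldNameColumn', 'Name', 'COLUMN';      -- translate a column name
ALTER TABLE dbo.Customers ALTER COLUMN Name nvarchar(200) NOT NULL;  -- change a column type
ALTER TABLE dbo.Customers ADD CreatedOnUtc datetime NOT NULL
    CONSTRAINT DF_Customers_CreatedOnUtc DEFAULT (GETUTCDATE());     -- add a new field
ALTER TABLE dbo.Customers DROP COLUMN ObsoleteFlag;                  -- drop an old field

Keeping a file like this in source control means every rename or type change made during development is recorded once and can simply be replayed against the production server after the final wizard run.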
I have researched the matter for a couple of days now and found a few more solutions that I do not believe are better than the ones mentioned. However, I accept the possibility that I am missing the 'big red X on the map' - a practical and easy solution that seems like it was designed specifically for me (though I doubt that a little). Anyway, reducing all the madness I have written so far to a few simple questions:
Are my conclusions correct? I am leaning towards option one, as it is easier to accomplish.
Has anyone found a better way to do this, or spotted some 'logic leaps' in my write-up? I may be overthinking the entire thing a little and making some obvious miscalculation.
Very sorry for asking a trivial question, and one whose decisions may require a deeper understanding of my project and situation, but I am working with rather sensitive data and would appreciate feedback, even if only to improve my confidence in the chosen approach.
There is one other tool/method you might want to consider that seems to cater to your specific needs more. That would be to use the data import/export tool that ships with SQL Server to do a complete copy of all the data into a temporary location within SQL Server, and then write custom queries to reorganize the names and make the other changes you want. It is a bit more work, but you could use the end product as a seed method for your migrations ;) (if you are doing code-first anyway).
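A very rough sketch of that idea, assuming the wizard has first copied the Access tables verbatim into a staging schema (every name here is a placeholder, not your real schema):

-- Sketch only: reshape the raw imported data into the final, English-named tables.
INSERT INTO dbo.Customers (CustomerId, Name, CreatedOn)
SELECT  s.[OldIdColumn],
        s.[OldNameColumn],
        CAST(s.[OldDateColumn] AS datetime)
FROM    staging.[OldCustomerTable] AS s;

One such INSERT ... SELECT per table gives you a repeatable, reviewable mapping from the old names and types to the new ones, which is exactly the part the Upsize Wizard won't do for you.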
I have some pretty complex reports to write. Some of them... I'm not sure how I could write an sql query for just one of the values, let alone stuff them in a single query.
Is it common to just pull a crapload of data and figure it all out in code instead? Or should I try to find a way to make all the reports rely on SQL?
I have a very rich domain model. In fact, parts of the code can be expanded on to calculate exactly what they want. The actual logic is not all that difficult to write - and it's nicer to work with my domain model than with SQL. With SQL, writing the business logic, refactoring it, testing it and putting it in version control is a royal pain, because it's separate from your actual code.
For example, one of the statistics they want is the % improvement, especially in relation to other people in the same class, the same school, and compared to other schools. This requires some pretty detailed analysis of how they performed in the past relative to their latest information, as well as doing the same calculation for the groups being compared against as a whole. I can't even imagine what the SQL query would look like.
The thing is, this % improvement is not a column in the database - it involves a big calculation in and of itself, analyzing all the live data in real time. There is no way to cache this data in a column, as doing the calculation for every row it's needed in, every time a student does something, is CRAZY.
I'm a little afraid about pulling out hundreds upon hundreds of records to get these numbers though. I may have to pull out that many just to figure out 1 value for 1 user... and if they want a report for all the users on a single screen, it's going to basically take analyzing the entire database. And that's just 1 column of values of many columns that they want on the report!
Basically, the report they want is a massive performance hog no matter what method I choose to write it.
Anyway, I'd like to ask what kinds of solutions you've used for these kinds of problems.
Sometimes a report can be generated by a single query. Sometimes some procedural code has to be written. And sometimes, even though a single query CAN be used, it's much better/faster/clearer to write a bit of procedural code.
Case in point - another developer at work wrote a report that used a single query. That query was amazing - turned a table sideways, did some amazing summation stuff - and may well have piped the output through hyperspace - truly a work of art. I couldn't have even conceived of doing something like that and learned a lot just from reading through it. Its only problem was that it took 45 minutes to run and brought the system to its knees in the process. I loved that query...but in the end...I admit it - I killed it. ((sob!)) I dismembered it with a chainsaw while humming "Highway To Hell"! I...I wrote a little procedural code to cover my tracks and...nobody noticed. I'd like to say I was sorry, but...in the end the job ran in 30 seconds. Oh, sure, it's easy enough to say "But performance matters, y'know"...but...I loved that query... ((sniffle...)) Anybody seen my chainsaw..? >;->
The point of the above is "Make Things As Simple As You Can, But No Simpler". If you find yourself with a query that covers three pages (I loved that query, but...) maybe it's trying to tell you something. A much simpler query and some procedural code may take up about the same space, page-wise, but could possibly be much easier to understand and maintain.
Share and enjoy.
Sounds like a challenging task you have ahead of you. I don't know all the details, but I think I would go at it from several directions:
Prioritize: You should try to negotiate with the "customer" and prioritize functionality. Chances are not everything is equally useful for them.
Manage expectations: If they have unrealistic expectations then tell them so in a nice way.
IMHO SQL is good in many respects, but it's not a brilliant programming language. So I'd rather just do calculations in the application rather than in the database.
I think I'd allow for some delay in the system... perhaps by caching calculated results for a few minutes before recalculating. This is with a mind towards performance.
The short answer: for analysing large quantities of data, a SQL database is probably the best tool around.
However, that does not mean you should analyse this straight off your production database. I suggest you look into Datawarehousing.
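Just to show that the kind of number you describe is expressible in SQL, here is a rough sketch of the "improvement versus class and school" calculation against a completely hypothetical Scores table (none of these names come from your schema, and "improvement" is simplified to latest score minus earliest score rather than a percentage):

-- Hypothetical schema: Scores(StudentId, ClassId, SchoolId, TakenOn, Score)
WITH improvement AS (
    SELECT StudentId, ClassId, SchoolId,
           MAX(CASE WHEN rn_desc = 1 THEN Score END)
             - MAX(CASE WHEN rn_asc = 1 THEN Score END) AS Delta
    FROM (
        SELECT StudentId, ClassId, SchoolId, Score,
               ROW_NUMBER() OVER (PARTITION BY StudentId ORDER BY TakenOn ASC)  AS rn_asc,
               ROW_NUMBER() OVER (PARTITION BY StudentId ORDER BY TakenOn DESC) AS rn_desc
        FROM Scores
    ) ranked
    GROUP BY StudentId, ClassId, SchoolId
)
SELECT i.StudentId,
       i.Delta,
       AVG(CAST(i.Delta AS float)) OVER (PARTITION BY i.ClassId)  AS ClassAverageDelta,
       AVG(CAST(i.Delta AS float)) OVER (PARTITION BY i.SchoolId) AS SchoolAverageDelta
FROM improvement AS i;

Whether a query like this runs acceptably against live data, or should be run periodically against a warehouse and the results cached, is exactly the trade-off data warehousing is meant to settle.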
For a one-off report, I'll write the code to produce it in whatever language I can best reason about the problem in.
For a report that'll be generated more than once, I'll check who is going to be producing it the next time. I'll still write the code in whatever language I can best reason about it in, but I might add something to make it more attractive for that other person to use.
People usually use a third party report writing system rather than writing SQL. As an application developer, if you're spending a lot of time writing complex reports, I would severely question your manager's actions in NOT buying an off-the-shelf solution and letting less-skilled people build their own reports using some GUI.
I had a (friendly but heated) argument with my lead developer the other day because our project has T-SQL scripts that I code directly into SQL files which I then run against the database. I find that when I do this, it's easy to work out the schema in advance without fiddly pointing and clicking. There's also no opportunity to forget to generate a script to put into source control, because generating the script is no longer a chore you have to do after the fact but an implicit part of the process (and it also leads to cleaner scripts, without the extra crap that SQL Server Management Studio inserts into the scripts it generates).
My lead developer insists that having to manually script it out is a pain in the arse and that he absolutely refuses to write his scripts by hand when there are perfectly good tools to do it without coding. I've noticed that the copying of his changes into the actual scripts tends to get delayed a bit as a result though.
What are your thoughts on the pros and/or cons of doing it one way vs the other? Am I being too rigid/old-school in my sticking to hand coding schema scripts or is he being too reliant on third party tools and losing something in the process?
I always script stuff myself, because the wizards sometimes don't script things the way I like, and they also give funky names to defaults.
Scripting things yourself is also good in case you get laid off and have to go to an interview where they ask you to script DDL on the whiteboard.
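Coming back to those funky default names, the difference looks something like this (the object names here are invented):

-- Hand-written: the default constraint gets a predictable, meaningful name.
ALTER TABLE dbo.Orders
    ADD CONSTRAINT DF_Orders_Status DEFAULT ('NEW') FOR Status;
-- A wizard-generated script will typically name the same constraint something
-- like DF__Orders__Status__3B75D760, which is painful to find and drop later
-- when the default has to change.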
As I usually collaborate with a colleague during schema design, I tend to design the schema using the GUI tools, as it's easier to discuss with a diagram of the tables in front of you. I then generate the scripts, being careful to select the exact options I want, to avoid having to make manual changes post-export.
I think a decision on the relative merits of the two approaches might take into account factors such as
the frequency of changes to the schema
the frequency with which changes need to be propagated to other schemas (test, user acceptance, production, clients * n, etc)
the degree to which the schema may vary across development branches
how well-known in advance your various changes can be scheduled
whether or not you can generate SQL "diff" scripts between schemas.
On balance, I tend to prefer to work with a script for each change (or "migration"). It lets me resequence change releases as priorities shift.
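For instance, a per-change script might look like the following (everything here is illustrative, and it assumes SQL Server 2005 or later for sys.columns); the guard makes it safe to re-run against an environment that already has the change:

-- 0412_add_customer_email.sql -- hypothetical migration script
IF NOT EXISTS (SELECT 1 FROM sys.columns
               WHERE object_id = OBJECT_ID(N'dbo.Customer')
                 AND name = N'Email')
BEGIN
    ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
END;

Numbering the scripts keeps the release order explicit, and resequencing a release is just a matter of renaming files.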
Just because you can create tables in a graphical tool doesn't necessarily mean you should.
I find it's as quick to write a script as it is to use SQLMS. You still have to type names in SQLMS, and the time spent moving between keyboard and mouse could be spent writing the proper script anyway.
The two of you are almost working with two sets of code. Consistency seems to be a key factor in these types of decisions. In your case, if you create a script and your boss uses the GUI to add a field, how do you stay in sync? You can't use your script to rebuild the table without editing it (a chance for error).
Maybe he should pull rank and force you to format your scripts the same way the GUI creates them - just kidding.
I think you should flip on it...
I have several MS Access queries (in views and stored procedures) that I am converting to SQL Server 2000 (T-SQL). Due to Access's limitations regarding sub-queries, and/or the limitations of the original developer, many views have been created that function only as sub-queries for other views.
I don't have a clear business requirements spec, except to 'do what the Access application does', plus half a page of notes on reports/CSV extracts - but the Access application doesn't even do properly what I suspect is required.
I therefore have to take a bottom-up approach and 'copy' the Access DB to T-SQL, whereas normally I would have a better understanding of the requirements and take a top-down approach, creating new queries to satisfy well-defined requirements.
Is there a method I can follow in doing this? Do I spread it all out and spend a few days 'grokking' it, or do I continue just copying the Access views and adopt an evolutionary approach to optimising the querying?
Work out what Access does with the queries, and then use this knowledge to check that you've transferred it properly. Only once you've done this can you think about refactoring. I'd start with slow queries and then go from there: work out what indexes you need and then progressively rewrite. This way you can deliver as soon as you've proved that you moved everything successfully (even if it is potentially a bit slower). That's much better than not being able to deliver at all because problem X came along.
I'd probably start with the Access database, exercise the queries in situ and see what the resultset is. Often you can understand what the query accomplishes and then work back to your own design to accomplish it. (To be thorough, you'll need to understand the intent pretty completely anyway.) And that sounds like the best statement of requirements you're going to get - "Just like it's implemented now."
Other than that, your approach is the best I can think of. Once they are in SQL Server, just start testing and grokking.
When you are dealing with a problem like this it's often helpful to keep things working as they are while you make incremental changes. This is better from a risk management perspective.
I'd concentrate on getting it working, then checking the database performance and optimizing performance problems. Then, as you add features and fix bugs, clean up the code that's hard to maintain. As you said, a sub-query is really very similar to a view. So if it's not broken you may not need to change it.
This depends on your timeline. If you have to get the project running absolutely as soon as possible (I know this is true for EVERY project, but if it's REALLY true for you), then yes, duplicate the functionality and infrastructure from Access then do your refactoring either later or as you go.
If you have SOME time you can dedicate to it, then refactoring it now will give you two things:
You'll be happier with the code, and it will (likely) perform better, since actual analysis was done rather than the transcoding equivalent of a copy-paste
You'll likely gain a greater understanding of what the true business rules are, since you'll almost certainly come across things that aren't in the spec (especially considering how you describe them)
I would recommend copying the views to SQL Server immediately, and then use its sophisticated tools to help you grok them.
For example, SQL Server can tell you what views, stored procedures, etc., rely on a particular view, so you can see from there whether the view is a one-off or whether it's actually used in more than one place. It will help you determine which views are more important than others.
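On SQL Server 2000, for instance, the built-in sp_depends procedure lists both what a given view references and what references it (the view name below is just a placeholder):

-- Shows objects dbo.vw_CustomerOrders depends on, and objects that depend on it.
EXEC sp_depends 'dbo.vw_CustomerOrders';

It isn't perfectly reliable (dependency tracking in SQL Server 2000 can miss objects created or rebuilt out of order), but it's usually enough to tell a one-off helper view from one that half the database relies on.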