How can I use IntelliJ IDEA to "group" data source connections into arbitrary categories? - intellij-idea

The place I'm currently working at has gone all-in on the micro-services approach. There are a hundred or so services, many with their own database, each requiring a different data source with different connection details.
Then there are multiple environments (local development, CI, SIT, UAT, etc.).
I don't have connections for each one of those, but I do have, so far, about 30 connections in my Database tool window, spread across a few of those environments (mostly local and CI, but a few from other environments).
I'd like to start organising/grouping those connections (probably by environment, though a different categorisation might make more sense later). More than once I've been jumping around between databases and ended up in the DB for the right service, but the wrong environment.
I've looked at the tool window buttons and the right-click menu, but nothing there seems relevant to grouping the data source nodes in the tool window.
My current workaround is to make sure each connection description starts with a prefix that identifies the environment - _local svc-blah, CI svc-blah, etc. IDEA sorts these alphabetically, so connections for the same environment tend to stay together (note the underscore at the beginning of the local connections, so they sort to the top of the list). This works OK, but if IDEA had a grouping mechanism it would presumably come with expand/collapse functionality, which would also help me out.
The question: How can I use IntelliJ IDEA to "group" data source connections into arbitrary categories?

Sure, they can be grouped. Just select a data source and hit F6: you will be prompted to enter a group name, and the group then appears as a 'folder' in the tree, into which you can drag and drop other data sources.

Related

Exchanging work before accurev promote

My colleague and I are participating in a huge project hosted in AccuRev. We've already created our own workspaces backed by some stream (let's call it zzz-stream), which is used by many other participants, not only by us.
The point is that we want to exchange our work between our workspaces, make some changes, exchange again, etc., BEFORE making the changes accessible to others. In other words, we don't want to propagate our changes until they are stable and tested, but we do want to be able to work on them together.
My idea was to create a new stream (yyy-stream) backed by zzz-stream, and then change our workspaces to be backed by yyy-stream. But unfortunately I have no rights to create streams.
My second idea was to use a workspace as the backing stream, but that doesn't work because AccuRev can't use a workspace as a backing stream.
Is there any solution for our problem?
UPD: I accepted Brad's answer as the most detailed. However, AccuRev is too heavy and sluggish to be used effectively, so I actually prefer to use Git for internal needs on top of the AccuRev workspace (see "AccuRev externally, git internally").
Your idea of creating the yyy-stream is EXACTLY the right way to do it. The other options are decent workarounds for one-off situations, but creating the extra stream is simple and fully leverages AccuRev's capabilities.
That being said, I understand that your admins have stream creation locked down. They of course want control, but they should also be maximizing developer productivity rather than forcing workarounds like this. My guess is they have stream creation locked down to a particular group, enforced by the server-admin trigger. One common thing I have seen other large sites do is:
- allow streams to be freely created off of a list of acceptable streams (easy to do in the trigger)
- enforce naming rules on stream creation. This is important to admins at large sites to keep things organized. Again, this is very easy to enforce via the server-admin trigger (a rough sketch follows this list).
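For illustration, the gist of such a parent/naming check might look like the sketch below. AccuRev's sample server-admin triggers are actually Perl scripts and receive their parameters via a trigger input file; the invocation, field handling, whitelist, and naming rule here are assumptions for the sketch, not AccuRev's real API.

import re
import sys

ALLOWED_PARENTS = {"zzz-stream", "int-stream"}            # assumed whitelist
NAME_RULE = re.compile(r"^[a-z0-9]+-collab-[a-z0-9-]+$")  # assumed naming convention

def check_mkstream(command, stream_name, backing_stream):
    """Return an error message to reject the operation, or None to allow it."""
    if command != "mkstream":
        return None  # only police stream creation
    if backing_stream not in ALLOWED_PARENTS:
        return "streams may only be created off of: " + ", ".join(sorted(ALLOWED_PARENTS))
    if not NAME_RULE.match(stream_name):
        return "stream names must look like <team>-collab-<topic>"
    return None

if __name__ == "__main__":
    # assumed invocation: trigger_check.py <command> <stream> <backing-stream>
    error = check_mkstream(*sys.argv[1:4])
    if error:
        print(error, file=sys.stderr)
        sys.exit(1)  # a non-zero exit is the usual way a trigger rejects an operation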
Bottom line: if this is a common situation, work with the admins to enable this capability as described above. If they have any questions, they are more than welcome to contact AccuRev and we will help them out.
Your idea of using another stream for you and your peer is a good one, and such a stream is commonly called a collaboration stream. If your site has stream creation locked down, you would need to work with your AccuRev administrator to make that happen.
Another option is for you and the other developer to pull the keeps from the other workspace into your own. This relies on both of you being diligent about doing keeps; you can then look at the history of the other developer's workspace to find the keep operation, right-click that transaction, and select Send to Workspace. The destination workspace must be your own.
A third option (more for a situation where you are in your workspace and know exactly which file you want to grab the other user's changes from) is to bring up the version browser for the file, right-click, and select History/Browse Versions. Look for the other workspace, highlight the version in that workspace, right-click, and select Send to Workspace. This will check out that version into your workspace.
This is similar to the change palette suggestion, but quicker if you're looking to do this on a file-by-file basis.
Another idea is to use a different version control system (e.g. Git or SVN) on top of the AccuRev workspace to exchange changes and keep our history separate from zzz-stream (similar to "AccuRev externally, git internally"). Only changed files should be added to the other VCS, not the whole project. Some merge problems do occur, though.

Application Scope settings or something else

I am in the process of building a completely fresh version of an application that has been in existence for a good many years. I can look back with horror now at some of the things I did, but the whole point of life is to learn as we go along. The nice thing now is that I have a clean slate to work from, and because of that I thought I would seek some advice from you all.
User settings are great for those things that each individual user would naturally want to, and ought to be able to, change - a theme or visual style, for example. Application settings quite obviously apply to the entire application, irrespective of who uses it.
Somewhere in the middle, though, is a set of settings that I would like to give the system administrator the opportunity to change (default work periods, appointment time slots, the currency the company wants to use as its main trading one, etc.). These can't be user settings, because individual users should not be able to change them; nor should they be application settings, because I as the developer have no idea what the end user (or, to be more exact, the senior end user) would want to set them to.
Many years ago I might have considered writing such settings to the registry or an INI file. I could perhaps (as this is an application that is tightly integrated with its own custom database) create a one-off settings table and read in the relevant settings at program startup. I could instead opt for a separate 'universal settings' XML configuration file stored in the all-users directory. Clearly there are a number of options.
What I would like to establish, though, is the most efficient way to approach this. What is the best trade-off between file read/write operations and reading everything into a set of public constants at application start-up? These are not settings that will only be referred to occasionally, so efficiency is going to be key.
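For what it's worth, the usual compromise is exactly the settings-table idea: one read at start-up into an in-memory cache, with a write-through on the rare occasions an administrator changes something. A minimal sketch of the pattern (in Python for brevity, since the app itself is VB.NET; the settings table name and columns are assumptions):

class SiteSettings:
    """Read-once, write-through cache over a settings table."""
    def __init__(self, db):
        self._db = db
        # one database read at start-up; table/column names are assumed
        self._cache = dict(db.execute("SELECT name, value FROM settings"))

    def get(self, name, default=None):
        return self._cache.get(name, default)  # no I/O on the hot path

    def set(self, name, value):
        # admin changes are rare, so a synchronous write-through is cheap
        self._db.execute("UPDATE settings SET value = ? WHERE name = ?",
                         (value, name))
        self._cache[name] = value

The same shape carries over to VB.NET directly: a shared class populated once at start-up, with EF or plain ADO.NET underneath.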
Just so that there is no ambiguity as to what the application will be: traditional WinForms, using VS 2012 as the development IDE and VB.NET as the code base, built on .NET 4.5 and EF 5.0. Backend data will be stored in either SQL Express or full SQL Server. The target operating system for end users will be Windows 7 or above (so due respect for UAC will be required).
I'd welcome any suggestions that you might have.

How to store configuration data so that to not copy it during database copy?

There are parameters that I would not want to be transferred from the production environment to the QA system - stuff like network paths and URLs. The problem is that in ABAP everything is in the database, and when the database is copied to the QA system you have to change those parameters manually. And this is prone to errors.
Is there a way to store configuration information in a way that won't get transferred with the database?
Thanks.
In short: no - at least that would be very unusual in a SAP environment.
If your QA system is set up as a system copy of your production environment (which is the usual path), there are quite a few steps to take to make the system work correctly. This includes some configuration, which can be as simple as file paths such as you mention, but also the addresses and names of "partner systems". For example, one of my customers is a bank, so when copying their production system, they make triply sure that no activity on the QA side accidentally trickles over to the production side. Some other changes are made as well, for example obscuring people's names and addresses so no mail gets sent accidentally, etc.
There are a few ways to make applying these changes as easy as possible (look for some SAP documentation or books on SAP transport and change management - I had one by Sue McFarland Metzger that was quite good). From what I've seen, there is usually a set of transports that change the configuration, customizing, etc. on the QA system to the appropriate values.
Hope that helps.
You cannot prevent the configuration stored in the database from being copied to the cloned instance. However, you can design the configuration storage in a way that prevents the copied entries from being used. You should check with your Basis administrators whether they can guarantee that the cloned system will get a new system ID (SID). If so, you can simply use the SID as a key field in your configuration table. After the system copy, the SID will have changed, and the cloned system will no longer read the original entries.
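To make the mechanism concrete, here is the lookup pattern sketched outside ABAP; in the real system this would be a Z-table with the SID as a key field and a SELECT filtered on sy-sysid.

# Illustration only: configuration rows keyed by (system ID, parameter name).
CONFIG = {
    ("PRD", "ARCHIVE_PATH"): r"\\prdserver\archive",
    ("QAS", "ARCHIVE_PATH"): r"\\qasserver\archive",
}

def get_config(current_sid, name):
    # After a system copy the SID changes, so rows keyed to the old SID
    # are copied along but simply never read on the clone.
    return CONFIG.get((current_sid, name))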
Your question is not clear - are you talking about standard or custom config?
Greetings. Assuming you are storing these paths in a Z table, some shops put sy-sysid (the system ID) in as one of the columns. You maintain entries for all systems in your dev system and transport them to production. This becomes painful after a while, so I would only suggest it for information that does not change a lot (file paths might be good).
T.

Concept of "Data Directory" on each platform

This is very similar to "Where should cross-platform apps keep their data?", but expands on it a bit.
There is some good advice on where the parent directory for data should be, but not so much on what a given app's directory should be.
For example, let's say we have a cross-platform application, written by My Corp, within My Brand, called My App. Assume there are other products in My Brand which presumably want their own data, and other brands in My Corp as well. Where should its data and/or configuration go on Windows? On Unix? Mac OS 9? Mac OS X? Other?
e.g., on Windows, would the data go in "...\Application Data\My Corp\My Brand\My App", while on Mac OS X the data would go into "~/Library/Application Support/My Corp/My Brand/My App" and on Unix into "~/.mycorp/mybrand/myapp"? (I would imagine other platforms would use the Unix-style mangling, even if the base directory may be different.)
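For concreteness, here is a small sketch of a helper implementing exactly that proposed layout (the paths are this question's proposal, not an official convention; Mac OS 9, which predates per-user home directories of this kind, is left out):

import os
import sys

def app_data_dir(corp, brand, app):
    """Per-platform data directory following the layout proposed above."""
    home = os.path.expanduser("~")
    if sys.platform == "win32":
        base = os.environ.get("APPDATA", os.path.join(home, "Application Data"))
        return os.path.join(base, corp, brand, app)
    if sys.platform == "darwin":
        return os.path.join(home, "Library", "Application Support", corp, brand, app)
    # the Unix-style mangling proposed above: one lower-cased dot-directory
    def mangle(s):
        return s.replace(" ", "").lower()
    return os.path.join(home, "." + mangle(corp), mangle(brand), mangle(app))

# app_data_dir("My Corp", "My Brand", "My App") gives, per platform:
#   Windows:  %APPDATA%\My Corp\My Brand\My App
#   Mac OS X: ~/Library/Application Support/My Corp/My Brand/My App
#   Unix:     ~/.mycorp/mybrand/myapp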
If there is no real convention, does this seem reasonable? Any suggestions for Mac OS 9?
Just to start the reflection:
You have to make a clear distinction between:
application state data
settings
data (can be common for several applications)
The latter may well live in a database, or could be managed by one or several applications, or be encapsulated behind a communication bus so that the other applications don't all have to talk to each other directly to access that data.
The data representing the state of an application can go in 'Application Data', as mentioned in the question "Where should cross-platform apps keep their data?".
But the settings... It depends whether your application needs to be launched with several "configurations":
- one per platform: in that case you must manage them at the development stage and package the right file at the release stage, so that it is stored in 'Application Data'
- many for one platform, e.g. with different heap sizes, or different settings representing different operations to be executed by the same app. That leads to an explosion of settings files (also stored in various sub-directories of 'Application Data').
That is where abstracting those data behind a Settings Provider becomes a good idea.
Actually, we have so many different configurations for our settings that we store them in a database on a separate production machine. That way all the apps can access them, but more importantly, we can access and change them in real time, without having to stop and restart the applications for each modification, and without having to go to each deployment platform.
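A rough sketch of what such a provider can look like: settings live in a shared database, and each app re-reads them once its local cache goes stale, so changes take effect without restarting anything. The table and column names here are assumptions.

import time

class SettingProvider:
    def __init__(self, db, ttl_seconds=30):
        self._db = db
        self._ttl = ttl_seconds
        self._cache = {}
        self._loaded_at = 0.0

    def get(self, name, default=None):
        if time.monotonic() - self._loaded_at > self._ttl:
            # refresh from the shared database once the cache is stale
            self._cache = dict(self._db.execute("SELECT name, value FROM settings"))
            self._loaded_at = time.monotonic()
        return self._cache.get(name, default)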

Best IT/back-office system hacks? [closed]

Lots of people have things that their systems do for them or for their teams. Source control post-commit hooks are a standard example: having an automated build system that checks out the latest source, compiles, tests, and packages it is a back-office hack that most of us probably use.
What other cool things have you done?
We had one developer on our team who wasn't familiar with the concept of a Subversion conflict. He deduced that if he simply deleted all that weird stuff in his code and clicked Resolve, everything would be OK (i.e. knocking out all the other changes in the file...).
Needless to say, after the 5th time this occurred, and the 5th time I had to explain why the defect I had just closed was recurring, I wrote a script.
It would diff the changes to a file to see whether a consecutive check-in had deleted all the previous changes, and whether it was done by the nameless developer.
It would then send an email to the boss with a description of what happened and how much work was lost during the check-in.
There was no 7th occurrence.
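For the curious, a rough sketch of what such a watchdog can look like as a Subversion post-commit hook. svnlook is real; the developer name, the thresholds, and the mail step are stand-ins.

import subprocess
import sys

REPO, REV = sys.argv[1], sys.argv[2]  # standard post-commit hook arguments
SUSPECT = "nameless_developer"        # hypothetical account name

author = subprocess.check_output(
    ["svnlook", "author", "-r", REV, REPO], text=True).strip()
if author == SUSPECT:
    diff = subprocess.check_output(["svnlook", "diff", "-r", REV, REPO], text=True)
    removed = sum(1 for l in diff.splitlines()
                  if l.startswith("-") and not l.startswith("---"))
    added = sum(1 for l in diff.splitlines()
                if l.startswith("+") and not l.startswith("+++"))
    # crude heuristic for "resolved a conflict by deleting everyone else's work"
    if removed > 50 and added < removed / 10:
        print(f"r{REV}: {removed} lines vanished in one check-in", file=sys.stderr)
        # here: compose and send the email to the boss describing the lost work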
We have a traffic-light that shows whether our daily build succeeds, has failed tests or simply doesn't build.
Also, we have a light bar that lights up for a few seconds whenever we receive an upload from a customer.
We aren't staffed 24x7, but we have critical processes that run throughout the night. We created an in-house alert system to notify us of serious system issues, failed mission-critical processes, etc. It uses text-to-speech to create a descriptive message and then connects to our automated dialer to call the appropriate people with the message.
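The text-to-speech half of such a system can be surprisingly small; a sketch using the pyttsx3 library (the dialer integration is site-specific and omitted):

import pyttsx3

def speak_alert(process_name, error):
    message = f"Attention. Critical process {process_name} has failed: {error}."
    engine = pyttsx3.init()
    engine.say(message)
    engine.runAndWait()  # blocks until the message has been spoken
    return message       # the same text can be handed to the dialer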
Working at a web design company, I configured our dev server so we could see a working copy of a project in real time via a subdomain name. So if your name was joe and you were working on project jetfuel, you would go to joe.jetfuel.test-example.com and see your changes instantly, without committing.
This was a simple hack that used subdomain names as a partial directory structure. Our htdocs path looked like this: htdocs/tag/project. We had a script (a PHP app that you accessed via setup.test-example.com) that would create a new tag name for you, check out whatever version you wanted, and call the deploy script for that project. If it succeeded, it would forward you to the new subdomain. You could then work on this new copy via a Samba share.
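The mapping itself is trivial, which is the charm of the hack. The real setup was Apache plus a small PHP app, but the idea reduces to "host name in, document root out":

def docroot_for(host, htdocs="/var/www/htdocs"):
    # joe.jetfuel.test-example.com -> /var/www/htdocs/joe/jetfuel
    tag, project = host.split(".")[:2]
    return f"{htdocs}/{tag}/{project}"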
This worked really well for us, since we always deployed to the same Linux build and our projects had simple database requirements.
Our original reason for doing this was that our developers worked on all kinds of different platforms. Besides fixing this platform problem, it was awesome for viewing changes and testing. We had all kinds of tags, ranging from people's names and trunk versions to test tags and prototypes like jquery-menu-hack.jetfuel.test-example.com.
Now that I look back I wonder how much easier it would have been to run virtual machines.
We had a dev working on a classic ASP site who didn't believe in source control. The code went from his machine straight to the production box. This led to issues with lost changes and the inability to revert to a stable version. Since CruiseControl.NET has the ability to monitor a directory, I added a project that actually checked files in whenever they were copied to production. Completely backward from CC.NET's original intent, but we didn't lose any more code.
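The same trick can be reproduced without CruiseControl.NET at all; a crude sketch of a polling loop that sweeps a production directory into Subversion (the path and interval are made up):

import subprocess
import time

WATCHED = "/var/www/production"  # hypothetical working copy over the prod directory

while True:
    # pick up any new files, then commit whatever changed
    subprocess.run(["svn", "add", "--force", "."], cwd=WATCHED)
    subprocess.run(["svn", "commit", "-m", "auto check-in from production copy"],
                   cwd=WATCHED)
    time.sleep(300)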
Put in a pre-commit hook that checks that the bug comment refers to an open bug assigned to the user doing the check-in. (SCMBug can do this.)
Then, to make life REALLY interesting, spell-check the comments!!
Both the commit comment and the one in the code. (spell is my buddy.)
Run the code through a code formatter set to the company standard, and diff it against the original: if it's not in official company format, reject the commit.
Do a coverage test with the unit test build.
Email all mistakes/errors to the development team.
I left OUT the name of the developer. They know they did it.
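A minimal sketch of the first of those checks as a Subversion pre-commit hook. svnlook is real; the bug-tracker lookup is a hypothetical stand-in for what SCMBug does for you.

import re
import subprocess
import sys

REPO, TXN = sys.argv[1], sys.argv[2]  # standard pre-commit hook arguments

log = subprocess.check_output(["svnlook", "log", "-t", TXN, REPO], text=True)
author = subprocess.check_output(
    ["svnlook", "author", "-t", TXN, REPO], text=True).strip()

match = re.search(r"\bbug[ #:]*(\d+)", log, re.IGNORECASE)
if not match:
    sys.exit("commit message must reference a bug number")  # non-zero exit rejects

# is_open_and_assigned_to() would be a call to your tracker's API, asking
# whether bug match.group(1) is open and assigned to `author`; sys.exit(...)
# with a message if it is not.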
Not exactly hacks, but a couple of must-haves for IT dev work:
If you're using Subversion, you've got to use CommitMonitor (http://tools.tortoisesvn.net/CommitMonitor). It lets you monitor SVN repositories for new commits and then review them. Great if you want to stay on top of what your team is doing - particularly if you have a couple of juniors that need to be watched. ;)
Rsnapshot (http://www.rsnapshot.org/) is also invaluable - we have complete backup snapshots of our entire filesystem every four hours going back two years, and daily snapshots beyond that. It's like a data cube for your filesystem! The peace of mind this gives is pure bliss. :)
Hardly a hack, but back in the day, on our speedy VAX 11/730, our overnight process would print the file "BLAMMO.TXT" on the printer if something went amiss. Every morning, the first stop on the way in was the printer.
Back in the dot-com days, about 9 years ago, I had to hack together a failover system between two different locations. We had a funky setup with a PowerBuilder front-end website and a PowerBuilder management tool. Data was stored in MS SQL 7.0. The web servers used IPX to communicate with the SQL Servers (don't ask). Anyway, I was responsible for coming up with a failover plan.
I ended up hacking together some Linux boxes and had them run our external DNS, one at each location. We had a remote site with a web server and a SQL Server, and I got SQL transaction replication working over a 128k ISDN IPX connection (of all things). Then I built a monitoring tool at our production site that sent packets out to various upstream network handoffs. If we experienced more than a 20% outage at the primary site, the monitoring tool ran a Perl script on the Debian box to change DNS and point to our secondary. Our secondary had a heartbeat with our primary DNS and the monitoring station. It would duplicate records unless it lost both connections, at which point it would roll over to pointing DNS at the backup location.
The primary site would then shut down the SQL Server at the primary location to break replication. Automated site-to-site failover over a 128k ISDN IPX connection :)
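The monitoring side of a setup like that doesn't have to be fancy; a sketch of the "more than 20% outage" test (the addresses and the DNS-flip script are made up):

import subprocess

HANDOFFS = ["192.0.2.1", "192.0.2.2"]  # upstream handoffs (documentation addresses)

def loss_fraction(host, count=10):
    result = subprocess.run(["ping", "-c", str(count), host],
                            capture_output=True, text=True)
    received = sum("bytes from" in line for line in result.stdout.splitlines())
    return 1 - received / count

average_loss = sum(loss_fraction(h) for h in HANDOFFS) / len(HANDOFFS)
if average_loss > 0.20:
    # hypothetical script that repoints external DNS at the secondary site
    subprocess.run(["/usr/local/bin/failover-dns.sh"])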
Back at my previous job, we had to audit many tables for data changes (inserts, updates and deletes). Our support crew had to be able to search through this data to find changes that users made.
The temporary solution that had become semi-permanent was to store each non-SELECT query. However, this was a large system, and the table would grow by about 1.5 GB a day.
The solution I came up with was a script that, for every table in an external list, created the appropriate triggers to audit each table, row, and column - before and after values, when, and by whom - and store it all in a new audit table. This table grew to about 10% of the size of the older version and stored much more usable data. It enabled us to create a UI to search and view every change made to our data, without requiring any SQL knowledge from our support team or business users.
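In outline, the generator is just string templating over a table list. The trigger below is MySQL-flavoured and deliberately simplified; the real version recorded per-column before/after values, and the exact DDL depends on your RDBMS.

# One simplified audit trigger per table; the real script also covered
# INSERT and DELETE, and emitted one audit row per changed column.
AUDIT_TRIGGER = """
CREATE TRIGGER {table}_audit_upd AFTER UPDATE ON {table}
FOR EACH ROW
  INSERT INTO audit_log (table_name, changed_by, changed_at)
  VALUES ('{table}', CURRENT_USER, NOW());
"""

def generate_triggers(tables):
    return [AUDIT_TRIGGER.format(table=t) for t in tables]

# for ddl in generate_triggers(["orders", "customers"]): print(ddl)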
This is at a lesser level, but I am fairly proud of a makefile I wrote for compiling code for my research. It only needs to be given your source and header file names and can take care of the rest all by itself (though it does make the one assumption that you will not be compiling any header files into objects; only source files get compiled). The other downsides are that it relies on GNU make's second expansion feature, so I don't know if it works with other make programs, and that the compiler used needs to support something similar to gcc's -MM option. Here's hoping that no one laughs at it.
# Pull in the auto-generated per-source prerequisite lists (see below).
-include prereqs.mk

HEADERS=$(SRC_DIR)/gs_lib.h $(SRC_DIR)/gs_structs.h
SOURCES=$(SRC_DIR)/main.cpp $(SRC_DIR)/gs_lib.cpp
OBJECTS=$(patsubst $(SRC_DIR)/%.cpp,$(OBJ_DIR)/%.o,$(SOURCES))

release: FLAGS=$(GEN_FLAGS)$(OPT_FLAGS)
release: $(OBJECTS) prereqs.mk
	$(CXX) $(FLAGS) $(LINKER_FLAGS) $(OUTPUT_FLAG) $(EXECUTABLE) $(OBJECTS)

# Ask the compiler (via -MM or similar) for each source's dependencies, then
# rewrite "name.o: ..." into a make variable "name = ..." per source file.
prereqs.mk: $(SOURCES) $(HEADERS)
	$(CXX) $(DIR_FLAGS) $(MAKE_FLAG) $(SOURCES) | sed 's,\([abcdefghijklmnopqrstuvwxyz_]*\).o:,\1= \\\n,' > $@

# Second expansion: each object depends on the variable named after its stem.
.SECONDEXPANSION:
$(OBJECTS): $$($$(patsubst $(OBJ_DIR)/%.o,%,$$@))
	$(CXX) $(FLAGS) $(NO_LINK_FLAG) $(OUTPUT_FLAG) $@ $(patsubst $(OBJ_DIR)/%.o,$(SRC_DIR)/%.cpp,$@)
Obviously I dropped the definitions of a number of variables, but I think it gets the idea across.
Since my coding tools and style are compatible with the requirements of this script, I like to use it. All I need to do to add new pieces of source code is add their names to the appropriate variable; the rest is taken care of.
We have Twitter accounts for many projects which tweet things like commit messages, notices from builds, failed unit tests, deployments, and bug tracking activity - any kind of event associated with the project. Running a Twitter client like Gwibber (which displays a pop-up for each new status) is a great way to stay in touch with the activity on the projects you are interested in. Using Twitter is good because you can take advantage of all the 3rd-party apps, such as the iPhone clients.
Add a commit-hook check for VRML/3D-model files with absolute paths to textures/images. f:/maya/my-textures/newproject/xxxx.png just doesn't belong on the server.
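Such a check is a few lines in any hook language; a sketch that scans committed files for drive-letter paths (the f:/maya example above is exactly what it catches):

import re
import sys

ABS_PATH = re.compile(rb"[A-Za-z]:[/\\]")  # e.g. f:/maya/my-textures/...

if __name__ == "__main__":
    for fname in sys.argv[1:]:
        with open(fname, "rb") as f:
            if ABS_PATH.search(f.read()):
                sys.exit(f"{fname}: contains an absolute path to a texture/image")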
Back in 1993, when source control systems were really expensive and unwieldy, the company I worked at had an in-house source control system built as 4DOS scripts. It wasn't as sophisticated as most current source control systems - for example, it didn't have branching or integrates - but it did the basic job of supporting revision history, checkout/check-in, and rudimentary conflict resolution.