How do I disable "activities" for a custom module on SugarCRM 6.5+/7+?

The title is pretty clear, but again: how do I disable "activities" for a custom module on SugarCRM 6.5+/7+?
I have a module containing millions of records, and activities have been slowing it down to a breaking point. I managed to stop the activities through some hacking (deleting entries from the cache folder), but I would like to know how to do it the right way, so that things stay normal after a Repair & Rebuild and the like.
Edit 1:
I'm happy to completely disable activities for a limited period of time while my script runs and then re-enable them right after, if that is possible.

Well, I figured out how to disable activities (the activity stream, known in the past as Sugar Feed, I think).
As my problem was running a script on 100k+ records, disabling the whole activity stream at the beginning of the script and turning it back on at the end was sufficient.
It's quite simple, and it feels like an embarrassment that I didn't look into the activity stream's source earlier, since in order to disable it, a simple:
Activity::disable();
does the job and to turn it back on:
Activity::enable();
There is also a "blacklist" array in the source, but (1) it didn't solve the problem and (2) editing it is clearly not upgrade-safe.
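To illustrate, a minimal sketch of that temporary toggle (assumptions: the script runs through Sugar's entry point so the Activity class is already loaded; $recordIds and processRecord() are hypothetical stand-ins for your own batch logic):

// Stop activity-stream writes before the heavy loop.
Activity::disable();

foreach ($recordIds as $id) {      // $recordIds: your batch of record IDs
    processRecord($id);            // hypothetical per-record work
}

// Turn the stream back on once the batch is done.
Activity::enable();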


YouTrack search query for issues updated since last viewed

In YouTrack 6.5, is it possible to write a search query that lists all issues that have been updated since the user last viewed them?
Something like "-read and #Unresolved" (unfortunately this only matches issues that have never been read, so an issue you've read once won't show up even if it was updated afterwards).
The reason behind this: it's quite difficult to determine which tasks you need to respond to or take care of, especially in a team where tasks get updated throughout the day.
Or is there another way to manage such "unseen changes" in issues (maybe just for the ones you are watching)? Or is there something similar to an inbox (other than notifications via e-mail)?
No, it's not supported. Have a look at https://youtrack.jetbrains.com/issue/JT-19610

Exchanging work before AccuRev promote

My colleague and I are participating in a huge project hosted in AccuRev. We've already created our own workspaces, backed by a stream (let's call it zzz-stream) that is used by many other participants, not only us.
The point is that we want to exchange our work between our workspaces, make some changes, exchange again, etc., BEFORE making the changes accessible to others. In other words, we don't want to propagate our changes until they are stable and tested, but we do want to be able to work on them together.
My idea was to create a new stream (yyy-stream) backed by zzz-stream, and then re-parent our workspaces onto yyy-stream. Unfortunately, I have no rights to create streams.
My second idea was to use a workspace as the backing stream, but that doesn't work because AccuRev can't use a workspace as a backing stream.
Is there any solution to our problem?
UPD: I accepted Brad's answer as the most detailed. However, AccuRev is too heavy and sluggish to be used effectively this way, so in practice I prefer to use Git for our internal needs on top of the AccuRev workspace (see "Accurev externally, git internally").
Your idea of creating the yyy-stream is EXACTLY the right way to do it. The other options are decent workarounds for one-off situations, but creating the extra stream is simple and fully leverages AccuRev's capabilities.
That being said, I understand that your admins have stream creation locked down. They of course want control, but they should also be maximizing developer productivity rather than forcing workarounds like this. My guess is that stream creation is restricted to a particular group, enforced by the server-admin trigger. Two things I have commonly seen other large sites do:
- allow streams to be freely created off a list of acceptable streams (easy to do in the trigger)
- enforce naming rules on stream creation; this is important to admins at large sites to keep things organized, and again is very easy to enforce via the server-admin trigger
Bottom line, if this is a common situation, work with the admins to allow this capability as per the above. If they have any questions, they are more than welcome to contact AccuRev and we will help them out.
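For reference, once stream creation is allowed, the setup is a couple of CLI commands. A sketch (the stream and workspace names are placeholders, and it's worth double-checking the flags with accurev help mkstream / chws):

accurev mkstream -s yyy-stream -b zzz-stream
accurev chws -w my_workspace_john -b yyy-stream

The chws command re-parents an existing workspace onto the new backing stream.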
Your idea of using another stream for you and your peer is a good one and is commonly called a collaboration stream. If your site has stream creation locked down, you will need to work with your AccuRev administrator to make that happen.
Another option is for each of you to pull the keeps from the other's workspace. This relies on both of you being diligent about doing keeps; you can then look at the history of the other developer's workspace, find the keep operation, right-click that transaction, and select Send to Workspace. The destination workspace must be your own.
A third option (more for a situation where you are in your workspace and know exactly which file you want to grab the other user's changes for) is to bring up the version browser for the file, right-click, and select Browse Versions. Find the other workspace, highlight the version in it, right-click, and select Send to Workspace. This will check out that version into your workspace.
This is similar to the change palette suggestion, but quicker if you're doing this on a file-by-file basis.
Another idea is to use a different version control system (e.g. Git or SVN) on top of the AccuRev workspace to exchange changes and keep our history separate from zzz-stream (similar to "Accurev externally, git internally"). Only the changed files should be added to the other VCS, not the whole project. Some merge problems do occur, though.
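A rough sketch of that Git-over-workspace flow (all names and paths are placeholders; the point is to version only the files you touch and exchange them through a repo AccuRev never sees):

cd /path/to/your/accurev_workspace
git init
git add src/feature_under_work          # only the files you are changing
git commit -m "WIP: feature X"
# exchange via a bare repo somewhere shared, e.g. a network drive
# (create it once with: git init --bare //server/share/feature.git):
git remote add shared //server/share/feature.git
git push shared master                  # your colleague pulls from the same remote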

Undo an Update in AccuRev?

How do I undo an update in AccuRev? I want to revert to a state where the contents of the files are exactly as they were before the "update" operation.
There are numerous ways to change the contents of your workspace to reflect an earlier configuration. Based on the limited description, where you reference "all the files under CM", I'll assume you want to roll back your entire workspace rather than a select few files.
Question: does everyone parented by the same stream as your workspace want to roll back, or just you? If it's everyone, you can change the time basis of that parent stream to reflect the specific point in time you want to revert to. Once that is done, run Update and you're good. If it's just you and it's more than a small sampling of files, I'd suggest creating a personal time-based stream, setting its time to the point you want, and re-parenting your workspace to it:
Current_parent -- New_personal_time_stream -- your_workspace
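A sketch of that setup with the CLI (the stream and workspace names are placeholders, and the exact flags are from memory, so verify them with accurev help mkstream / chstream):

accurev mkstream -s personal_time_stream -b Current_parent
accurev chstream -s personal_time_stream -t "2014/06/01 09:00:00"
accurev chws -w your_workspace_user -b personal_time_stream
accurev update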
There are other options as well if you just want to deal with a few select files, but it sounds like this is what you're after...
Cheers,
~James

How can I speed up a batch processing job in ColdFusion?

Every once in a while I am fed a large data file that my client uploads and that needs to be processed through CFML. The problem is that if I put the processing on a CF page, it runs into a timeout after 120 seconds. I was able to move the processing code into a CFC, where it seems not to hit the timeout; however, at some point during the processing it causes ColdFusion to crash and the server has to be restarted. There are a number of database queries (5 or more, a mixture of updates and selects) required for each of the 8,000+ lines of the file, as well as other logic I provide in CFML.
My question is: what would be the best way to work through this file? One caveat: I am not able to move the file to the database server and process it entirely in the DB. That said, would it be more efficient to pass each line to a stored procedure that took care of everything? It would still be a lot of database calls, but nothing compared to what I have now. Also, what would be the best way to give the user feedback on how much of the file has been processed?
Edit:
I'm running CF 6.1
I just did a similar thing and use CF often for data parsing.
1) Maintain a file upload table (parent table). For every file you upload, keep a row recording the file and its status (uploaded, processed, unprocessed).
2) Use a temp table to store all the rows of the data file (child table). Import the entire data file into this temporary table; attempting to hold it all in memory will inevitably lead to errors. Each row in this table links back to a file upload entry from step 1.
3) Maintain a processing status. For each row of the data file you bring in, set a processed/unprocessed flag. That way, if the job breaks, you can resume from where you left off. As you run through each line, mark it processed.
4) Use a transaction. Use cftransaction if possible to commit everything at once, or at least one line at a time (with your 5 queries), so that if something goes boom you don't end up with a row of data that is half computed/processed/updated (a sketch of steps 3-4 follows this list).
5) Once you're done processing, mark the file's entry from step 1 as processed.
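A minimal CFML sketch of steps 3-4 (the datasource and the file_upload_row table with its id/raw_line/processed columns are hypothetical; swap in your own schema and per-line queries):

<cfquery name="unprocessed" datasource="#myDsn#">
    SELECT id, raw_line
    FROM file_upload_row
    WHERE processed = 0
</cfquery>

<cfloop query="unprocessed">
    <cftransaction>
        <!--- your per-line work goes here; one hypothetical example --->
        <cfquery datasource="#myDsn#">
            INSERT INTO target_table (payload)
            VALUES (<cfqueryparam value="#unprocessed.raw_line#" cfsqltype="cf_sql_varchar">)
        </cfquery>
        <!--- mark the row done so a crash can resume where it left off --->
        <cfquery datasource="#myDsn#">
            UPDATE file_upload_row
            SET processed = 1
            WHERE id = <cfqueryparam value="#unprocessed.id#" cfsqltype="cf_sql_integer">
        </cfquery>
    </cftransaction>
</cfloop>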
With the approach above, if something fails you can restart from where it left off, or at least have a clearer path for where to start investigating, or, worst case, a clean way to repair your data. You will also have a clear way of showing the user the status of the current upload: where it is, and where it stopped if there was an error.
If you have any questions, let me know.
Other thoughts:
You can increase timeouts, give the VM more memory, move to 64-bit, but all of those only increase your system's capacity so much. It's a good idea to set these per call and to do it in conjunction with the above.
Java has some neat file-processing libraries that are available as CFCs. If you run into a lot of speed issues, you can use one of those to read the file into a variable and then into the database.
If you are working with XML, do not use ColdFusion's XML parsing. It works well for smaller files but has fits when things get bigger. There are several CFCs out there (check RIAForge, etc.) that wrap some excellent Java libraries for parsing XML data. You can then build a query manually from this data if need be.
It's hard to tell without more info, but from what you have said I'll throw out three ideas.
The first: with so many database operations, it's possible that you are generating too much debugging output. Make sure that the following settings are turned off under the Debug Output settings in the administrator:
Enable Robust Exception Information
Enable AJAX Debug Log Window
Request Debugging Output
The second thing I would do is look at those DB queries and make sure they are optimized. Make sure the selects are using indexes, etc.
The third thing I would suspect is that holding the whole file in memory is suboptimal.
I would try iterating over the file using file looping:
<cfloop file="#VARIABLES.filePath#" index="VARIABLES.line">
    <!--- VARIABLES.line holds one line at a time; the file is streamed
          rather than read into memory all at once --->
    <!--- Code to go here --->
</cfloop>
Have you tried an event gateway? I believe those threads are not subject to the same timeout settings as page request threads.
SQL Server Integration Services (SSIS) is the recommended tool for complex ETL (extract, transform, and load) work, which is what this sounds like. (It can be configured to access files on other servers.) The question is whether you can work up an interface between ColdFusion and SSIS.
If you can, upgrade to CF8 and take advantage of cfloop file="", which will give you greater speed and will not put the file in memory (which is probably the cause of the crashing).
Depending on the situation, you could also use cfthread to speed up processing.
Currently, an event gateway is the only way to get around the timeout limits of an HTTP request cycle. CF has no way to process CF pages offline; that is, there is no command-line invocation (one of my biggest gripes about CF: very little offline processing).
Your best bet is to use an event gateway or to rewrite your parsing logic in straight Java.
I had to do the same thing. Ben Nadel has written a bunch of great articles on using Java file I/O to read and write files more speedily, etc.
It really helped improve the performance of our CSV-importing application.
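For flavor, the core of that Java-I/O technique in CFML looks roughly like this (the path is a placeholder; the loop leans on the classic CF idiom that assigning the Java null returned by readLine() at EOF undefines the variable):

<cfscript>
    // stream the file through java.io instead of reading it whole
    fileReader = createObject("java", "java.io.FileReader").init("C:\uploads\data.csv");
    lineReader = createObject("java", "java.io.BufferedReader").init(fileReader);

    line = lineReader.readLine();
    while (isDefined("line")) {
        // ... process the current line here ...
        line = lineReader.readLine();   // returns null at EOF, ending the loop
    }
    lineReader.close();
</cfscript>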

App launch sequencer

Every morning when I get into work I launch about a dozen apps (FF, TB, 2-3 instances of VS, Eclipse, SSH, 2-3 SVN updates). Needless to say, this does a good job of warming up my HDD for the day. I rather suspect it would all run a lot faster if the apps were launched sequentially (not to mention that I wouldn't need to click in 17 different places).
Is there a preexisting product that can kick off a sequence of tasks/apps where each task is only started after the previous app is done hammering the HDD?
It would need to be able to kick off apps like VS and Firefox and also be able to trigger Explorer context-menu items like SVN Update in TortoiseSVN.
Try SlickRun. It's free, I've used it for years, I use it constantly, and I'd be lost without it.
Think of it as a configurable Start->Run command. It'll do what you want (you can configure n-second pauses between multiple commands), and if you install it, you'll use it for a thousand different things before the first week is out.
P.S. I have no stake in SlickRun, I just like it :)
Unfortunately, I don't know of any software that can do this for you automatically.
However, can't you trigger the updates through a console SVN command? If so, this could be done with a batch file. It's low-tech, and you might want to add a few pauses between tasks, but it should do what you want.
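Something along these lines, for instance (all paths are placeholders; svn.exe assumes a command-line SVN client is installed, and timeout needs Vista or later; on XP, ping -n 31 127.0.0.1 >nul is the usual substitute):

@echo off
start "" "C:\Program Files\Mozilla Firefox\firefox.exe"
timeout /t 30
start "" "C:\Program Files\Mozilla Thunderbird\thunderbird.exe"
timeout /t 30
svn update "C:\work\project1"
svn update "C:\work\project2"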
As you mention TortoiseSVN, I'll assume your OS is Windows.
You could launch an AutoHotkey script at startup. I don't think it can easily detect HDD activity, but you can at least wait until each application's window appears, using the WinWaitActive command.
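For example (a sketch; the paths and window titles are placeholders for your own apps):

SetTitleMatchMode, 2                 ; match window titles by substring
Run, C:\Program Files\Mozilla Firefox\firefox.exe
WinWaitActive, Mozilla Firefox       ; block until the browser window is up
Run, C:\Program Files\Mozilla Thunderbird\thunderbird.exe
WinWaitActive, Mozilla Thunderbird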
If each application takes a fairly predictable time to launch, you could simply use Windows' Scheduled Tasks application, found in the Control Panel (you'll obviously need to be running Windows).
Execute "Add Scheduled Task", select the program, the frequency, and then the specific time.