Paladin and anyone else hitting the same target causes a server crash

Playing on a Linux-hosted AzerothCore, rev. be423a91b535, master branch.
Whenever I, as a paladin, hit a target with my abilities and then someone else swings at it, the entire server crashes. The only error that shows up in the log is this.
We've tested this with multiple characters on multiple accounts, and in all instances casting either my Judgement of Light or Judgement of Mana causes the server to crash completely when another character swings at said target.
It does not happen if I do not use these abilities and just attack the target at the same time as another player.
This is a clean server, recently set up, with no additional modules or changes beyond lowering the guild signature requirement.
[error image]
I've also gotten a snapshot of the server config if that has useful information.
[server config]

Apparently a module, QAston Proc, had been added, and it causes worldserver crashes. If anyone else has this issue, updating should clear it up, as the offending commit has been removed.

Related

Compiled Access Program Runs Fine on 7 Computers but Crashes on 3 others

I have written a rather complex application in Microsoft Access. It is split into front end and back end files. To protect my code, I have compiled it and saved it as a runtime .accde file, which I then changed to an .accdr file to ensure it operated as a runtime. I have created two versions of the application: one for those with 32-bit Office installed and one for those with 64-bit Office. I have used Inno Setup to package the application, the data file, and other files such as the icon file, the license file, etc., into an installable package, which works just fine.
Among my team of 27 beta testers of this application, so far 6 have downloaded it, and I have tested it on four of my own computers. On seven of these computers, the installation works perfectly and the application runs with no problems.
On the computers of three of my testers, when they try to run it, they get this error message:
The expression On Open you entered as the event property setting produced the following error: Bad file name or number.
* The expression may not result in the name of a macro, the name of a user-defined function, or [Event Procedure].
I'm pretty sure I know where the code is that's causing the problem, but I cannot for the life of me figure out why the application crashes on those three computers but not on the others.
The On Open event I suspect of causing the problem checks the linked tables, gets their connect strings, then looks at the path in each string for the back end database. If it does not find it there, the procedure pops up a file selector dialog, instructs the user to find the data file, and then relinks all the tables.
If anyone could point me in the right direction to fixing this problem, I would be extremely grateful.
This is typically caused by a reference labelled as MISSING.
You have two (three) options:
Run the application on the offending machines with a full version of Access that lets you debug the code
Create a small test application that lists and verifies the references you use, and run this on the offending machines
Remove those two customers
Thanks to all the contributors here. Because of these folks and additional online research, the latest answer I can find is this:
This error occurs on a small percentage of computers on which the app is installed, and no one has yet figured out why, what causes it, or how to fix it. The workaround is to install the 2013 version of the Access runtime, as later versions will still cause the problem.
At least one of the offending computers is running the Click-to-Run version of Office. Still gathering information, but that's the status as of now.

inittab respawn of Node.js too fast

So I am trying to keep my Node server running on an embedded computer when it is out in the field. This led me to leveraging inittab's respawn action. Here is the line I added to inittab:
node:5:respawn:node /path/to/node/files &
I know for a fact that when I start this node application from the command line, it does not get to the bottom of the main body and console.log "done" until a good 2-3 seconds after I issue the command.
So I feel like in that 2-3 second window the OS just keeps firing off respawns of the node app. In fact, I see in the error logs that the kernel ends up killing off a bunch of node processes because it is running out of memory... plus I also get the message that the 'node' process is respawning too fast and will be suspended for 5 minutes.
I tried wrapping this in a script; that didn't work. I know I can use crontab, but that only runs every minute... am I doing something wrong? Or should I take a different approach altogether?
Any and all advice is welcome!
TIA
Surely too late for you, but in case someone else finds such a problem: try removing the & from the command invocation.
What happens is that when the command goes to the background (thanks to the &), the parent (init) sees that it exited, and respawns it. Result: a storm of new instantiations of your command.
Worse, you mention embedded, so I guess you are using busybox, whose init won't rate-limit the respawning the way other implementations would. So the respawning will only end when the system runs out of memory.
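For the record, the corrected entry would look something like this (the full path to the node binary is an assumption here; init usually has a very minimal PATH, so spell it out):
node:5:respawn:/usr/local/bin/node /path/to/node/files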
inittab is overkill for this. I found out that what I need is a process monitor. I found one that is lightweight and effective, and it has good reports of working well out in the field: http://en.wikipedia.org/wiki/Process_control_daemon
Using this would entail configuring this daemon to start and monitor your Node.js application for you.
That is a solution that works from the OS side.
Another way to do it is as follows. If you are trying to keep Node.js running like I was, there are several modules written to keep other Node.js apps running. To mention a couple, there are forever and respawn. I chose to use respawn.
This method entails starting one app written in Node.js that uses the respawn module to start and monitor the actual Node.js app you were interested in keeping running anyway.
Of course the downside of this is that if the Node.js engine (V8) goes down altogether, then both your monitoring and monitored processes will go down with it :-(. But it's better than nothing!
PCD would be the ideal option. It would probably only go down if the OS goes down, and if the OS goes down then hopefully one has a watchdog in place to reboot the device/hardware.
Niko

xperfview on a different computer

Most use cases I've seen with xperf involve using xperfview on the same computer. Remote record and local playback don't seem to work well for me; symbols are not resolved correctly. Is there a known issue with remote record and local playback with xperf/xperfview?
Why are you trying a remote connection? If you use xperf -d to stop logging, the ETL contains all metadata, so the symbols can be loaded from any PC you want. Copy it from PC A to PC B and view the ETL there.
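For example, something along these lines (a rough sketch; DiagEasy is just one of the stock kernel provider groups, and the file name and symbol cache path are placeholders):
rem on the recording PC
xperf -on DiagEasy
rem ... reproduce the scenario, then stop logging and merge metadata ...
xperf -d trace.etl
rem copy trace.etl to the analysis PC, then there:
set _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/download/symbols
xperfview trace.etl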
Now that the 8.1 version of WPT is out, the recommended way to record traces is not with xperf.exe but with wprui.exe. This makes trace recording much simpler and much less error prone. See this blog post for details:
http://randomascii.wordpress.com/2013/04/20/xperf-basics-recording-a-trace-the-easy-way/
And yes, you absolutely should be able to record traces on one machine and view them on another.

How to force a version conflict in iCloud

I have a working implementation of iCloud. Now I want to improve conflict handling by adding some merge functionality. I've been trying to come up with a consistent way of forcing a conflict for testing purposes, but I haven't had luck so far: conflicts don't occur consistently when I expect them to. This might indicate that I'm doing something wrong, or maybe that I just misunderstood something about how iCloud works (yet another thing, I mean).
I'm using UIDocument and yes, I'm listening to the UIDocumentStateChangedNotification. In fact, I do get some occasional conflict notifications. Also, I only have one file in iCloud.
Having two devices using the same iCloud account, here is the flow of events I was expecting to always cause a conflict:
Open the file on both devices (both devices are now correctly seeing the same content). Note: Here is the only time openWithCompletionHandler is called, after this it's never called again.
Make some change on device A and call saveToURL.
Wait some time to allow the change to propagate.
Make some other change on device B and call saveToURL.
Wait some time to allow the change to propagate.
EXPECTED: The app should be getting a conflict notification from iCloud. OBSERVED: A conflict does occur very occasionally, but most of the time what happens is simply that the UIDocument gets its UIDocumentStateEditingDisabled flag set and then cleared back after half a second or so (I'd guess editing is being disabled while the iCloud daemon is pulling the version from the other device and saving it in the local ubiquitous directory).
Much like a version control system like SVN, I was expecting the version from device B to cause a conflict because an "update" was required in order to get the version uploaded by device A.
Am I wrong expecting a conflict in the scenario I just described? Why? Is there any other way to consistently force a conflict?
Thanks!
I would have thought a better way to cause a conflict would be to:
Make sure both devices have an up-to-date copy of the data
Put both devices into Airplane Mode to prevent any iCloud updates
Change the data in the same place on both devices, each with different new data
Turn the network back on
Wait for the changes to propagate
From the docs:
Conflicts occur when two instances of an app change a file locally and both changes are then transferred to iCloud. For example, this can happen when the changes are made while the device is in Airplane mode and cannot transmit changes to iCloud right away. When it does happen, iCloud stores both versions of the file and notifies the apps’ file presenters that a conflict has occurred and needs to be resolved.
The way you are doing it (allowing time for sync, altering the documents differently) seems like it shouldn't cause a conflict.
iCloud works basically the same as a version control system, except that you can only access the conflicting versions (when a conflict happens).
When a device pulls ver_1 from iCloud, edits, saves, and finds that the server has a different version (ver_2 or newer) than it expected, a conflicted version is created.
After the initial sync, you can:
turn off wifi on device B, edit & save.
edit on device A, save.
turn on wifi on device B.
A conflict will come soon.

Best IT/back-office system hacks? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Lots of people have things that their systems do for them or for their teams. Source control post-commit hooks are a standard example: an automated build system that checks out the latest source, compiles, tests, and packages it is a back-office hack that most of us probably use.
What other cool things have you done?
We had one developer on our team who wasn't familiar with the concept of a Subversion conflict. He deduced that if he simply deleted all that weird stuff in his code and clicked resolve, everything was OK (i.e. knocking out all the other changes in the file...).
Needless to say, after the 5th time this occurred, and the 5th time I had to explain why that defect I had just closed was recurring, I wrote a script (a sketch of the idea is below).
It would diff the changes to a file to see whether a subsequent check-in had deleted all the previous changes, and whether that check-in was done by the nameless developer.
It would then send an email to the boss with a description of what happened, and how much work was lost during the checkin.
There was no 7th occurrence.
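A rough sketch of that kind of post-commit hook, assuming Subversion (the username, the 50-line threshold and the boss's address are placeholders):
#!/bin/sh
# post-commit hook: $1 = repository path, $2 = revision
REPOS="$1"
REV="$2"
AUTHOR=$(svnlook author -r "$REV" "$REPOS")
# count lines removed by this revision (ignoring the --- diff headers)
REMOVED=$(svnlook diff -r "$REV" "$REPOS" | grep -c '^-[^-]')
if [ "$AUTHOR" = "nameless_developer" ] && [ "$REMOVED" -gt 50 ]; then
    svnlook diff -r "$REV" "$REPOS" | \
        mail -s "r$REV by $AUTHOR removed $REMOVED lines" boss@example.com
fi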
We have a traffic-light that shows whether our daily build succeeds, has failed tests or simply doesn't build.
Also, we have a light bar that lights up for a few seconds whenever we receive an upload from a customer.
We aren't staffed 24x7 but we have critical processes that run throughout the night. We created an in-house alerts system to notify us of serious system issues, failed mission-critical processes, etc. It uses text-to-speech to create a descriptive message and then connects to our automated dialer to call the appropriate people with the message.
Working at a web design company, I configured our dev server so we could see a working copy of a project in real time via a subdomain name. So if your name was joe and you were working on project jetfuel, you would go to joe.jetfuel.test-example.com and see your changes instantly without committing.
This was a simple hack that used subdomain names as a partial directory structure. Our htdocs path looked like this: htdocs/tag/project. We had a script (a PHP app that you would access via setup.test-example.com) that would create a new tag name for you, check out whatever version you wanted, and call the deploy script for that project. If it succeeded, it would forward you to the new subdomain. You could then work on this new copy via a Samba share.
This worked really well for us since we always deployed to the same linux build and our projects had simple database requirements.
Our original reason for doing this was that our developers worked on all kinds of different platforms. Besides fixing the platform problem, this was awesome for viewing changes and testing. We had all kinds of tags, ranging from people's names, trunk versions, and test tags, all the way to prototypes like jquery-menu-hack.jetfuel.test-example.com.
Now that I look back I wonder how much easier it would have been to run virtual machines.
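The post doesn't say which web server was used, but as a sketch of the same idea, Apache's mod_vhost_alias can map subdomains straight onto that htdocs/tag/project layout:
# joe.jetfuel.test-example.com -> /var/www/htdocs/joe/jetfuel
UseCanonicalName Off
VirtualDocumentRoot /var/www/htdocs/%1/%2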
We had a dev working on a classic ASP site who didn't believe in source control. The code went from his machine straight to the production box. This led to issues with lost changes or the inability to revert to a stable version. Since CruiseControl.NET has the ability to monitor a directory, I added a project that actually checked in files whenever they were copied to production. Completely backward from CC.Net's original intent, but we didn't lose any more code.
Put in a pre-commit hook that checks that the bug comment refers to an open bug assigned to the user doing the check-in (SCMBug can do this; a sketch of such a hook is shown below).
Then to make life REALLY interesting, spell check the comments!!
The commit comment, and the one in the code. (spell is my buddy)
Run the code through a code formatter set to the company standard, and diff it against the original: if it's not in the official company format, reject the commit.
Do a coverage test with the unit test build.
Email all mistakes/errors caused to the development team.
I left OUT the name of the developer. They know they did it.
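A minimal sketch of the first of those checks as a Subversion pre-commit hook (the BUG-1234 comment convention and the check_bug_open helper that queries the bug tracker are hypothetical placeholders; SCMBug wires this up properly):
#!/bin/sh
# pre-commit hook: $1 = repository path, $2 = transaction id
REPOS="$1"
TXN="$2"
LOG=$(svnlook log -t "$TXN" "$REPOS")
AUTHOR=$(svnlook author -t "$TXN" "$REPOS")
# require a bug reference like BUG-1234 in the commit comment
BUG=$(echo "$LOG" | grep -Eo 'BUG-[0-9]+' | head -n 1)
if [ -z "$BUG" ]; then
    echo "Commit comment must reference an open bug (e.g. BUG-1234)." >&2
    exit 1
fi
# check_bug_open: hypothetical helper that asks the bug tracker whether
# the bug is open and assigned to this author
if ! check_bug_open "$BUG" "$AUTHOR"; then
    echo "$BUG is not an open bug assigned to $AUTHOR." >&2
    exit 1
fi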
Not exactly hacks, but a couple of must-haves for IT dev work:
If you're using subversion, you've got to use CommitMonitor. (http://tools.tortoisesvn.net/CommitMonitor) It lets you monitor svn repositories for new commits & then review the new commits. Great if you're wanting to stay on top of what your team is doing. Particularly if you have a couple of juniors that need to be watched. ;)
Rsnapshot (http://www.rsnapshot.org/) is also invaluable - we have complete backup snapshots of our entire filesystem every four hours going back 2 years, and every day beyond that. It's like a data cube for your filesystem! The peace of mind this gives is pure bliss. :)
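As a rough sketch of that kind of setup (the retention counts, paths and schedule here are assumptions, not the poster's actual configuration):
# /etc/rsnapshot.conf - fields must be separated by tabs
snapshot_root	/backup/snapshots/
retain	hourly	6
retain	daily	730
backup	/	localhost/
# /etc/cron.d/rsnapshot - run it every four hours, plus a daily rotation
0 */4 * * *	root	/usr/bin/rsnapshot hourly
30 23 * * *	root	/usr/bin/rsnapshot daily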
Hardly a hack, but back in the day, on our speedy VAX 11/730, our overnight process would print the file "BLAMMO.TXT" on the printer if something went amiss. Every morning, the first stop was the printer when coming in.
Back in the dot-com days about 9 years ago, I had to hack together a failover system between two different locations. We had a funky setup with a PowerBuilder front-end website and a PowerBuilder management tool. Data was stored in MSSQL 7.0. The webservers used IPX to communicate with the SQL Servers (don't ask). Anyway, I was responsible for coming up with a failover plan.
I ended up hacking together some Linux boxes and had them run our external DNS, one at each location. We had a remote site with a webserver and SQL server, and I got SQL transaction replication working over a 128k ISDN IPX connection (of all things). Then I built a monitoring tool at our production site to send packets out to various upstream network handoffs. If we experienced more than a 20% outage at the primary site, the monitoring tool ran a Perl script on the Debian box to change DNS and point to our secondary. Our secondary had a heartbeat with our primary DNS and monitoring station. It would duplicate records unless it lost both connections, in which case it would roll over to pointing DNS at the backup location.
The primary site would shut down the SQL server at the primary location to break replication. Automated site-to-site failover using a 128k ISDN IPX connection :)
Back at my previous job, we had to audit many tables for data changes (inserts, updates and deletes). Our support crew had to be able to search through this data to find changes that users made.
The temporary solution that had become semi-permanent was to store each non-SELECT query. However, this was a large system, and the table would grow by about 1.5 GB a day.
The solution I came up with was a script that, for all tables in an external list, created the appropriate triggers to audit each table, row and column (before and after values, when, and by whom) and store the results in our new audit table. This table grew at about 10% of the size of the older version and stored much more usable data. It enabled us to create a UI to search and view every change made to our data, without requiring any knowledge of SQL from our support team or business users.
This is at a lesser level, but I am fairly proud of a makefile I wrote for compiling the code for my research. It only needs to be given your source and header file names and can take care of the rest all by itself (though it does make the one assumption that you will not be compiling any header files into objects; only source files get compiled). The other downsides are that it relies on the GNU make program's second expansion feature, so I don't know whether it works with other make programs, and that the compiler used needs to support something similar to gcc's -MM feature. Here is hoping that no one laughs at it.
# pull in the generated prerequisites (one variable per object file)
-include prereqs.mk

HEADERS=$(SRC_DIR)/gs_lib.h $(SRC_DIR)/gs_structs.h
SOURCES=$(SRC_DIR)/main.cpp $(SRC_DIR)/gs_lib.cpp
OBJECTS=$(patsubst $(SRC_DIR)/%.cpp,$(OBJ_DIR)/%.o,$(SOURCES))

release: FLAGS=$(GEN_FLAGS)$(OPT_FLAGS)
release: $(OBJECTS) prereqs.mk
	$(CXX) $(FLAGS) $(LINKER_FLAGS) $(OUTPUT_FLAG) $(EXECUTABLE) $(OBJECTS)

# regenerate the prerequisites whenever a source or header changes, turning
# each "foo.o: ..." line from the compiler into a "foo = ..." variable
prereqs.mk: $(SOURCES) $(HEADERS)
	$(CXX) $(DIR_FLAGS) $(MAKE_FLAG) $(SOURCES) | sed 's,\([abcdefghijklmnopqrstuvwxyz_]*\).o:,\1= \\\n,' > $@

# each object depends on the files listed in the variable named after it
# (recipe lines above and below must be indented with a tab)
.SECONDEXPANSION:
$(OBJECTS): $$($$(patsubst $(OBJ_DIR)/%.o,%,$$@))
	$(CXX) $(FLAGS) $(NO_LINK_FLAG) $(OUTPUT_FLAG) $@ $(patsubst $(OBJ_DIR)/%.o,$(SRC_DIR)/%.cpp,$@)
Obviously I dropped the definition of a number of variables, but I think it gets the idea across.
Since my coding tools and style are compatible with the requirements of this script I like to use it. All I need to do to add (a) new piece(s) of source code is add its name(s) to the appropriate variable and the rest is taken care of.
We have Twitter accounts for many projects which tweet things like commit messages, notices from builds, failed unit tests, deployments, and bug tracking activity - any kind of event associated with the project. Running a Twitter client like Gwibber (which displays a pop-up for each new status) is a great way to stay in touch with the activity on the projects you are interested in. Using Twitter is good as you can take advantage of all the third-party apps, such as the iPhone clients.
Add a commit-hook check for VRML/3D-model files that contain absolute paths to textures/images. f:/maya/my-textures/newproject/xxxx.png just doesn't belong on the server.
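A hedged sketch of such a check as a Subversion pre-commit hook (the drive-letter regex is only a rough stand-in for "absolute Windows path"):
#!/bin/sh
# pre-commit hook: reject added lines containing absolute texture/image paths
REPOS="$1"
TXN="$2"
if svnlook diff -t "$TXN" "$REPOS" | grep -Eiq '^[+].*[a-z]:[/\\][^ ]*\.(png|jpg|tga)'; then
    echo "Absolute texture paths (like f:/...) are not allowed." >&2
    exit 1
fi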
Back in 1993, when source control systems were really expensive and unwieldy, the company I worked for had an in-house source control system built as 4DOS scripts. It wasn't as sophisticated as most current source control systems (for example, it didn't have branching or integrates), but it did the basic job of supporting revision history, checkout/checkin, and rudimentary conflict resolution.