I have little knowledge of Redis or how it works. Another app wrote this data, but its developers have no idea how to fix my problem, so please be specific.
The app had an option to set a Redis key prefix, and I decided to use 'abc'. It turns out now that I should not have used anything for the prefix and should have just used an empty string ''. I have five years of data in the Redis files that I'd rather not lose.
How can I change the prefix from 'abc' to ''?
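Edit: From what I've read so far, I believe this boils down to renaming every key that starts with the old prefix. Below is a rough, untested sketch using the StackExchange.Redis client, assuming the app prepends the literal string 'abc' to each key name (adjust the pattern and offset if it uses a separator like 'abc:'), and assuming you back up the data (e.g. BGSAVE) first:

    using StackExchange.Redis;

    // Connect and enumerate keys matching the old prefix (uses SCAN under the hood).
    var muxer = ConnectionMultiplexer.Connect("localhost:6379");
    var server = muxer.GetServer(muxer.GetEndPoints()[0]);
    var db = muxer.GetDatabase();

    foreach (var key in server.Keys(pattern: "abc*"))
    {
        string oldKey = key;
        string newKey = oldKey.Substring(3); // strip the "abc" prefix
        db.KeyRename(oldKey, newKey);        // RENAME oldKey -> newKey
    }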
Thanks,
I wrote a small C# service importing tracking numbers into a single UDF, separated by a , (comma). The problem is that occasionally (maybe every 200th document) a comma is saved as a semicolon. A somewhat similar issue I have is with the Amazon importer, where I add a comment: at maybe the same frequency, the comment ends up with a whitespace inserted between every pair of original characters. What these have in common is that the error cannot be within my code; there is no difference between the correct documents (ca. 95%) and the others.
Does anybody have an idea how I can work around these issues so they don't appear anymore?
Or why this can happen?
I know I am on an outdated SAP B1, version 9.2 PL 10 Hotfix 3. The DI-API is linked to the install folder. Is this issue fixed in any later version?
(My current workaround is a cron job that checks for wrong entries in the DB and updates those documents. Very uncool.)
Definitely sounds like a DI-API bug. If you posted your code it would help confirm this.
Assuming it IS a DI-API bug, I would "dark side" it and just do a regular SQL update (bypass the DI-API), since it's just a UDF and there's probably not any business logic you need SAP to perform on these updates.
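For instance, a minimal sketch of that direct fix. The table and UDF names here (OINV, U_TrackingNumbers) and the connection string are placeholders for whatever document table and UDF you actually use:

    using System.Data.SqlClient;

    // Replace stray semicolons with commas directly in the UDF, bypassing the DI-API.
    using (var conn = new SqlConnection("Server=myServer;Database=myCompanyDb;Integrated Security=true"))
    {
        conn.Open();
        var cmd = new SqlCommand(
            "UPDATE OINV SET U_TrackingNumbers = REPLACE(U_TrackingNumbers, ';', ',') " +
            "WHERE U_TrackingNumbers LIKE '%;%'", conn);
        int rowsFixed = cmd.ExecuteNonQuery();
    }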
Alternatively, you could normalize your data and create a separate table, linked via FK to your current table, to house a single UDF value per row (therefore not having to deal with the weird comma character issue).
As a third alternative, you could make use of the SBO Post-Transaction Notification SP to monitor for your error case and perform the "fix" there, instead of in your cron job.
Disclaimer: I have not worked with SAP in 4+ years.
I have an internal Apache server for testing purposes, not client facing.
I wanted to upgrade the server to Apache 2.4, but there is no space left, so I was trying to delete some files on the server.
After checking file sizes, I found that the folder /var/lib/elasticsearch takes 80 GB of space. For example, /var/lib/elasticsearch/elasticsearch/nodes/0/indices/logstash-2015.12.08 takes 60 GB already. I'm not sure what Elasticsearch is. Is it safe if I delete this logstash index? Thanks!
Elasticsearch is a search engine, like a NoSQL database, and it stores its data in indices. What you are seeing is the data of one index.
Probably someone was using the index around 2015, when the index was timestamped.
I would just delete it.
I'm afraid that only you can answer that question. One use for Logstash + Elasticsearch is to help make sense out of system logs. That combination isn't normally set up by default, so I presume someone set it up at some time for some reason, and it has obviously done some logging. Only you can know if it is still being used, or if it is safe to delete.
As other answers pointed out, Elasticsearch is a distributed search engine, and I believe an earlier user was pushing application or system logs to this Elasticsearch instance using Logstash. If you can find the source application, check whether the original log files are still there; if yes, then you can go ahead and delete your index. I highly doubt anyone still needs logs from 2015, but it is really your call to check your application's archiving requirements and then take the necessary action.
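If you do decide to delete it, the usual way is a DELETE request against the index through Elasticsearch's REST API rather than removing the directory on disk. A minimal sketch, assuming Elasticsearch is listening on the default localhost:9200 (a curl -XDELETE against the same URL does the same thing):

    using System;
    using System.Net.Http;

    // Delete a single index through the Elasticsearch REST API.
    using (var client = new HttpClient())
    {
        var response = client
            .DeleteAsync("http://localhost:9200/logstash-2015.12.08")
            .GetAwaiter().GetResult();
        Console.WriteLine(response.StatusCode); // OK if the index was removed
    }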
I'm passing the flag -Denable-debug-rules, which according to the documentation at http://graphdb.ontotext.com/documentation/standard/rules-optimisations.html should print something to a log at least every 5 minutes.
Unfortunately it's not doing so, and I need to figure out why inferencing is taking so long.
Help?
The specific file is http://purl.obolibrary.org/obo/pr.owl and I'm using the owl2-rl-optimized ruleset.
Version graphdb-ee-6.3.1
An exchange with GraphDB tech support clarified that the built-in rulesets cannot be monitored. To monitor them effectively, copy the ruleset into a new file and add that file as a custom ruleset, following http://graphdb.ontotext.com/documentation/enterprise/reasoning.html#operations-on-rulesets
I am new to this so forgive me for not understanding the lingo.
I have been using the Rackspace cloud control panel to build multiple virtual servers. I use them for maybe a couple of hours, then I delete them. I need these servers to all have specific and unique names such as "server1, server2, server3, etc." I also need them to have a specific password, unlike the randomly generated password that is assigned by default.
I have been creating each individual server manually (based on an image that's set up), then I have to go back, reset the password, and reboot all of them. Doing each one manually is a bit time consuming, and I'm sure there is an easier way. Please help me figure this out.
I've been doing some searching, but I haven't found anything too relevant to my problem; on top of that, I'm not too familiar with programming and such.
Basically what I'm looking to do is automatically create these servers with their appropriate names and passwords already built in from the start. I'm not sure if some sort of "API" is the answer, or if there's some sort of script that can be written, or both.
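From my searching, it looks like the "API" route would mean sending create-server requests instead of clicking through the panel. Here is a rough, untested sketch against the Rackspace (OpenStack-style) Cloud Servers REST API; the region, account number, token, and image/flavor IDs are placeholders, and whether the password can be set at creation time may depend on the API version:

    using System.Net.Http;
    using System.Text;

    var client = new HttpClient();
    client.DefaultRequestHeaders.Add("X-Auth-Token", "YOUR_AUTH_TOKEN");

    for (int i = 1; i <= 3; i++)
    {
        // Name each server "server1", "server2", ... and request a known password.
        string body = "{\"server\": {\"name\": \"server" + i + "\", " +
                      "\"imageRef\": \"YOUR_IMAGE_ID\", " +
                      "\"flavorRef\": \"YOUR_FLAVOR_ID\", " +
                      "\"adminPass\": \"YourChosenPassword\"}}";
        client.PostAsync(
            "https://dfw.servers.api.rackspacecloud.com/v2/YOUR_ACCOUNT/servers",
            new StringContent(body, Encoding.UTF8, "application/json"))
            .GetAwaiter().GetResult();
    }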
Any assistance is much appreciated.
Thanks,
Chris
I'm thinking about using a DB for my logs instead of a normal txt file. Why? In a DB I could handle them much more easily than with a txt file. Actually, I don't have a big log txt; there are some exceptions, and, for every single day, user logins and which client uploaded which file where. But even here a DB would make sense, right? What free (for noncommercial and commercial use) small DBs should I try? I could use a "real" DB like PostgreSQL, or NoSQL with a simple XML DB like BaseX; that's what I thought. Any suggestions? Thank you.
Edit: Oh sorry, I forgot: I'm using .NET, but maybe that's not so important.
What will you do with your logging information? If you are going to do regular complex analysis work on it (performance, trending, etc.), then a database would be very useful. If you just need a place to dump "this happened" type messages that will be used infrequently at best (post-crash analysis and the like), a simple text or XML file should be more than sufficient. (If you do that, cycle the files every day or week: rename the current file, say with the date/time, and start a new "current" log file, as sketched below.)
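For instance, a minimal sketch of that cycling idea in .NET (file names are just examples):

    using System;
    using System.IO;

    // Rename the current log with a date stamp; the next write starts a fresh file.
    const string current = "app.log";
    if (File.Exists(current))
    {
        string archived = "app-" + DateTime.Now.ToString("yyyyMMdd-HHmmss") + ".log";
        File.Move(current, archived);
    }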
Use SQLite. Really small footprint, cross-platform, a single file for the whole DB, and serverless (http://www.sqlite.org). Give it a try.
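For example, a minimal logging sketch in .NET using the System.Data.SQLite provider (table, column, and file names are just examples):

    using System;
    using System.Data.SQLite; // NuGet package: System.Data.SQLite

    using (var conn = new SQLiteConnection("Data Source=logs.db;Version=3;"))
    {
        conn.Open();
        new SQLiteCommand(
            "CREATE TABLE IF NOT EXISTS log (ts TEXT, level TEXT, message TEXT)",
            conn).ExecuteNonQuery();

        var insert = new SQLiteCommand(
            "INSERT INTO log (ts, level, message) VALUES (@ts, @level, @msg)", conn);
        insert.Parameters.AddWithValue("@ts", DateTime.UtcNow.ToString("o"));
        insert.Parameters.AddWithValue("@level", "INFO");
        insert.Parameters.AddWithValue("@msg", "user X uploaded file Y to Z");
        insert.ExecuteNonQuery();
    }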
Using the Package Manager, you can install SqlServerCompact, which works within your solution.
Use the Package Manager Console and type the following command:
Install-Package SqlServerCompact
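After installation, a rough sketch of first use (the file and table names are just examples; SQL CE has no CREATE TABLE IF NOT EXISTS, hence the file-existence check):

    using System.IO;
    using System.Data.SqlServerCe; // added by the SqlServerCompact package

    const string connStr = "Data Source=logs.sdf";
    if (!File.Exists("logs.sdf"))
    {
        // Create the database file once, then create the log table inside it.
        new SqlCeEngine(connStr).CreateDatabase();
        using (var conn = new SqlCeConnection(connStr))
        {
            conn.Open();
            new SqlCeCommand(
                "CREATE TABLE log (ts DATETIME, message NVARCHAR(4000))",
                conn).ExecuteNonQuery();
        }
    }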