My BB has an "archives" forum. I want to write a mod that will make it so that, if a topic is in the archives, then it can be viewed only by users who joined before the topic was posted. Is this feasible?
Yes, it's feasible. The Topics Only Visible to OP mod already gives you a fairly close functional approximation of what you need.
Starting from this mod, a few things would need to change:
The viewtopic.php instructions would need to compare the topic's creation date against the viewing user's registration date, instead of checking whether the viewer is the original poster.
The viewforum.php instructions would need to make the same comparison against each topic's first-post creation date, again instead of checking for the original poster.
After looking through the installation instructions and the changes you'd need to make, those seem to be the two biggest changes required. The ACP changes are mostly wording, and the variable names could probably be made more appropriate, since your mod won't be about only the OP seeing the post.
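To make the required check concrete, here is a minimal sketch of the rule those edits would enforce (written in Java purely for illustration, all names hypothetical; the actual mod edits are PHP comparing phpBB's stored user registration date against the topic's post time):

    // Sketch only: the rule the modified viewtopic.php/viewforum.php would enforce.
    // phpBB stores registration dates and topic times as Unix timestamps.
    public final class ArchiveVisibility {

        /** A user may view an archived topic only if they joined before it was posted. */
        public static boolean canView(boolean topicIsInArchiveForum,
                                      long userRegDate, long topicTime) {
            if (!topicIsInArchiveForum) {
                return true; // non-archive forums are unaffected
            }
            return userRegDate < topicTime;
        }
    }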
I was reading some interesting questions on the topic "Can we make a program that, given a particular sequence, produces the next terms?", like this one, and I really liked the detailed answer to this one. I understand that the answer is "that's impossible without more restrictions", and that given some restrictions (polynomials, rational functions or boolean maps) we know some good algorithms, as the second answer I linked explains.
Now, a natural question is how much of the original, general question we can solve, trying our best even if we can't always succeed. What I usually do when facing a hard sequence is to check whether it's in the OEIS and, if it seems to be there, to look for a formula or algorithm there that produces it. You can download a small version of the OEIS with the first terms of each sequence, and you can make queries to find formulas or Maple algorithms for a particular sequence. My question is: do you think it's feasible to download a small version of the OEIS that includes, along with the first terms, a little algorithm to produce each sequence?
The natural problem here is that I haven't seen any link to download the entire OEIS database with all the details, which maybe deserves its own question. Even if we had that, we would need to read the formulas/algorithms (which, from what I've seen, can be written in different languages) and interpret them correctly. But I thought maybe someone here knows how to solve this; in any case, thanks in advance.
You could, as you note, download the sequences and their A-numbers from the link mentioned here: https://oeis.org/wiki/Welcome#Compressed_Versions
After searching that and finding one sequence (or a small number of sequences) of interest, you could scrape the respective page(s) for formulas. There are specific fields for Maple and Mathematica, which may be helpful; otherwise, an entry in the PROGRAM field should include identifying information when it is not in one of the standard languages with its own field in the database. See: http://oeis.org/wiki/Style_Sheet
Unofficially, but with the interests of the OEIS in mind, I would not recommend trying to download or scrape the OEIS in its entirety. Whether it's one person or a whole host of people, we would certainly recommend using the compressed version of the database to identify sequences of interest by A-number first, and then pulling their entire entries by scraping the site or querying the OEIS using the methods you have already mentioned: Programmatic access to On-Line Encyclopedia of Integer Sequences
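To illustrate the A-number-first step, here is a small sketch (in Java; the one-line-per-sequence format of the stripped file is documented on the compressed-versions page, everything else here is my own naming) that scans the downloaded file for sequences containing a given run of terms:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.stream.Collectors;

    public class StrippedSearch {

        // Data lines in the 'stripped' file look like "A000045 ,0,1,1,2,3,5,8,...";
        // comment lines start with '#'. We match the terms as a contiguous run.
        public static List<String> matchingANumbers(Path strippedFile, long[] terms)
                throws IOException {
            StringBuilder sb = new StringBuilder(",");
            for (long t : terms) {
                sb.append(t).append(',');
            }
            String needle = sb.toString();
            try (var lines = Files.lines(strippedFile)) {
                return lines.filter(line -> !line.startsWith("#"))
                            .filter(line -> line.contains(needle))
                            .map(line -> line.substring(0, line.indexOf(' ')))
                            .collect(Collectors.toList());
            }
        }

        public static void main(String[] args) throws IOException {
            // Should report A000045 (Fibonacci) among the matches.
            System.out.println(matchingANumbers(Path.of("stripped"),
                    new long[]{3, 5, 8, 13, 21}));
        }
    }

You could then fetch only those few entries in full (by A-number) rather than scraping the whole site.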
If this sounds laborious, perhaps an alternative is the Wolfram Cloud, which achieves this through other means. For example, you can navigate to the cloud (you may have to register just to get access) at: https://www.wolframcloud.com/
Typing in something like FindSequenceFunction[{1, 2, 3, 5, 17, 305, 34865}] will give you a formula, if Wolfram/Mathematica can find one. The documentation for FindSequenceFunction can be found here: https://reference.wolfram.com/language/ref/FindSequenceFunction.html
Wolfram/Mathematica can also invoke the OEIS using packages like the one described here: https://mathematica.stackexchange.com/questions/40/is-it-possible-to-invoke-the-oeis-from-mathematica
I'm building a student schedule generator and I need a way of producing more than one solution. Is there some way to save off feasible scores or scores of Xhard/Ysoft?
I need to be able to output more than one potential schedule, so that the student has a choice of one schedule over another if, for whatever reason, they don't want the "best" schedule (maybe they don't like one of the professors, maybe they don't want an 8am class, whatever).
My original idea was to save off all feasible solutions using the bestSolutionChanged event listener. The problem with this is that once it finds a 0hard/0soft score, it ignores all scores after that, including scores that are equal.
Ideally I'd like to save off all scores of 0hard/-3soft or better, but just being able to save any feasible scores or force optaplanner to look for a new best score would be useful as well.
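For reference, the listener approach described above looks roughly like this (a sketch, not a full solution: "Schedule" stands in for your own @PlanningSolution class, and a HardSoftScore is assumed). Its limitation is exactly what the analysis below explains: the event only fires for strictly better scores, so equal-scored solutions are never delivered.

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
    import org.optaplanner.core.api.solver.Solver;

    public class FeasibleSolutionCollector {

        // Collects every new best solution scoring 0hard/-3soft or better.
        // Assumes Schedule.getScore() returns a HardSoftScore.
        public static List<Schedule> solveAndCollect(Solver<Schedule> solver,
                                                     Schedule problem) {
            List<Schedule> candidates = new CopyOnWriteArrayList<>();
            HardSoftScore threshold = HardSoftScore.of(0, -3); // valueOf(...) in older releases
            solver.addEventListener(event -> {
                Schedule best = event.getNewBestSolution(); // already a planning clone
                if (best.getScore().compareTo(threshold) >= 0) {
                    candidates.add(best); // safe to keep: the solver won't mutate this clone
                }
            });
            solver.solve(problem);
            return candidates;
        }
    }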
This is not a solution, but an analysis of the problem:
Hacking the BestSolutionRecaller is obviously not just a big pain, it's also behaviour we don't want to encourage, as it makes upgrading to a newer version an even bigger pain. So don't expect us to solve this by adding an easy way to configure that in the solver config any time soon. That being said, a solution for this common problem is clearly needed.
When a new best solution is found, it is planning cloned (see the docs for the definition) from the working solution (the internal solution in OptaPlanner). This allows us to remember that new best solution as the working solution changes. That also means the BestSolutionChangedEvent gets a planning clone and can safely ship it to another thread, for example to marshal it to a client (presuming any ProblemFactChanges you create do copies instead of alterations), without being corrupted by the solver thread that modifies the working solution.
A new best solution implies that workingScore > bestScore. The moment it instead uses workingScore >= bestScore, we need far more planning clones (which are a bit CPU-expensive), but we could then just send out BestSolutionChangedEvents for those too, if and only if a flag is enabled of course, because most users (unlike yourself) don't want this behaviour.
One proposal is to create a separate BestSolutionChangedOrSameEvent, next to the BestSolutionChangedEvent. This might not be ideal, because we need to be able to detect whether or not someone needs those extra planning clones.
Another proposal is to just have a flag in the <solver> config that switches from > to >= behavior for BestSolutionChangedEvent.
Please create a jira (see "get help" on the webpage) and link it here, or create a support ticket (also see "get help" on the webpage).
So I have the following domain model:
Article, which is basically a blog post and is currently an entity.
Now, I'd like to add the following features:
When a user views an article (in their browser), an API call is made to "flag" the blog post as read.
Now, if I do some computation, I should be able to determine which articles haven't been read yet.
When a user posts a comment on an article, an API call is made to "flag" the blog post as followed.
Now, if I do some computation, I should be able to determine whether any new comments have been posted since the user's latest comment.
Basically, both features (read & follow) share the same attributes: an article id, a user id and a read/action date.
Note that, if an article is followed and then read, the read date should be used.
Therefore, I thought I could use the same object, adding an extra attribute to mark it as followed.
Do you have any design ideas?
Note that there are many articles & users. I'm using Doctrine2 and MySQL, but this applies to any language.
To ensure your application scales well, I'd do your computations locally when the events are triggered, i.e. someone adds a comment and that causes the system to check who has an investment in that new comment. Otherwise you end up with a scheduled task processing all the data, which will run fine at first, but whose workload will keep growing as the relations between users, articles and comments increase.
You can also look into using the Map/Reduce pattern; Ayende has a good introductory article on this, which is almost in the same application domain as you describe (articles, comments, etc.).
As for the event of marking an article or comment as read by a particular user, this is something that is neither an article thing nor a user thing. If you were using a document database and wanted to store this data against a user, it could build up quite a bit of data over time, so I'd be more tempted either to store the data in a new entity or to store it against the article (as in theory an article will have an initial burst of interest and then dip to a level representing its popularity).
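To make the "new entity" option (and your own same-object-plus-flag idea) concrete, here is a minimal sketch, in Java purely for illustration since, as you say, this applies to any language; all names are mine, not a prescribed Doctrine2 mapping. One row per (article, user) pair, with a unique constraint on that pair:

    import java.time.Instant;

    public class ArticleInteraction {
        private long articleId;
        private long userId;
        private Instant actionDate; // most recent read or comment date
        private boolean following;  // set once the user has commented

        public void markRead(Instant when) {
            // a later read supersedes the follow date, per the stated requirement
            if (actionDate == null || when.isAfter(actionDate)) {
                actionDate = when;
            }
        }

        public void markFollowed(Instant when) {
            following = true;
            if (actionDate == null || when.isAfter(actionDate)) {
                actionDate = when;
            }
        }

        public Instant getActionDate() { return actionDate; }
        public boolean isFollowing()   { return following; }
    }

"Unread articles" are then the articles with no ArticleInteraction row for the user, and "new comments" are comments on a followed article that are newer than actionDate.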
Hopefully some of that might help.
An interesting problem occurred recently, and I've been thinking of the "best" way (for a given value of "best") to implement this.
In essence, it's a problem of tracking notes against source code. The example that flagged this was getting a problem fixed in live within SLAs, and how best to achieve this. Without going into all the details, it came down to finding a function that's used in a number of places and may or may not be buggy, yet the problem was being reported in only a single location.
The fix to meet the SLAs was simply to add a check into the location where the problem was reported, rather than tweaking the common code and having to test everything that touches that function.
The interesting issue is then upstreaming. The "correct" method would be to go back and check the original function, validate that it's correct everywhere it's called, and then make the change "properly" if it's determined that the library function is wrong.
The problem is this takes time, so upstreaming may simply take the workaround, etc. However, if the problem occurs again (say six months later) in another location calling the same library function, there isn't an easy way to link the two problems together. You can search the bug tracking database, but this isn't guaranteed to help - it depends on whether a note's been added saying something along the lines of "this library function needs more thorough checking, but no time to investigate now".
So the question is this: within a large team of developers (30 plus, split into teams of both support and on-going development), what methods do you use to manage (what are effectively) "sticky notes" against source code, short of adding a comment to the suspicious function's source code saying "this might be a bit dodgy"?
The problem with committing a comment is one of process: a change is a change, so committing a zero-change change (i.e., one where just comments are added) is not ideal; developers can make mistakes even when adding a comment (hit a stray key or something), so it's always (IMO) better to commit only where actual changes are made.
Now, a wiki could be used to track per-file notes, but we've got a minimum of four branches and in excess of a few hundred files (SQL objects, source code, XML files, etc.), so a wiki will get unmanageable quite quickly.
This is the sort of thing it would be nice if SCMs could support: bits of metadata against files that are simply notes, and don't add to the SCM's version history, but can be displayed when doing (say) an svn update, or viewed manually.
There may already be solutions out there -- so how do you manage this type of knowledge sharing?
Well, we're now using this method: in each folder checked into SVN, we've created a .url shortcut (this is Windows we're dev'ing on) that links to a page about that folder on our development wiki. Thus we can update the wiki info freely, and on checkout/update everyone gets a link that will take them to the appropriate wiki page for that folder/module.
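For anyone unfamiliar with the format, each checked-in shortcut is just a two-line text file (the wiki URL below is made up for illustration):

    [InternetShortcut]
    URL=http://devwiki.example.com/wiki/trunk/billing-module

Windows renders it as a clickable link in Explorer, so the note travels with the working copy without polluting the source files' version history.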
We've only recently put it in place, so we'll have to see how well it works long term -- but it's better than what we had before (i.e., nothing :-) ).
[I'm not asking about the architecture of SO, but it would be helpful to the question.]
On SO, when a user clicks on their name and then on "responses", they see other users' responses to comment threads, questions, and answers in which they have participated. I've had the sneaking suspicion that I've missed certain responses out there, which made me wonder: if you had to build that feature, would you pull everything dynamically from the database every time a user requested it? Or would you update it whenever there is new related activity in the application? Or would you build it in a nightly daemon process?
I imagine that the real answer is that it's dynamically constructed every time, but that the tables are denormalized in such a way so as to make the thing less time-consuming. How would you build it?
I'm asking about any platform, of course, not only on .Net.
I would pull it dynamically from the database every time. I think this gives you the best result from a user-experience point of view, and then I would apply the principle that premature optimization is evil. Later, if there were performance issues, I would look into caching.
I think doing it as a daemon/push process would actually result in more overall work being done; that is, the updates would happen more frequently than users request the info.
Obviously, when an answer or comment is posted, you'll want to identify the user that should be informed in their responses tab. Then just add a row to a responses table containing the response text, timestamp, and the user to which it belongs. That way you can dynamically generate the tab with a simple
select * from responses where user=<userid> order by time desc limit 30
or something like that.
p.s. Extra credit to anyone that can write a query that will remove old responses - assume that each person should have the last 30 responses in their responses tab.
I expect that userid would be a natural option for the clustered index. If you have an "Active" boolean field then you don't need to worry much about locks; the table could be write-only except to update the (unindexed) Active column. I bet it already works that way, since it appears that everything is recoverable.
Don't need no stinking extra-credit response remover.
I would assume this is denormalized in the database. The Comment table probably has both an answer_id and an answer_uid, so the SQL to find comments on your answers just runs against the Comment table. The same setup would work on the Answer table: each answer has a question_id and a question_uid.
Having said that, these are probably the same table, with a response_to_id and a response_to_uid; that makes lots of code simpler and makes the "recent" tab a single select as well. In fact, the only difference between the two selects is that one uses the uid and the other uses the response_to_uid.
I'd say that your UI and your database should both be driven by your Application Domain; so they will reflect each other based on their common provenance there.
Some quick notes to illustrate, using simplified Object Role Modeling as discussed by Fowler et al.
Entities
Users
Questions
Answers
Comments
Entity Roles
(Note: In Object Role Modeling, most Roles are reflexive. Some, e.g. booleans here, are monopolar)
Question has User
Question has QuestionVersions
Question has Answers
Question has Comments
Answer has AnswerVersions
Answer has Comments
QuestionVersion has Text
QuestionVersion has Timestamp
QuestionVersion has IsDeleted (could be inferred from a non-NULL timestamp, e.g.)
QuestionVersion has DeletedByUser
QuestionVersion has DeletedTimestamp
Answer has User
AnswerVersion has Text
AnswerVersion has Timestamp
AnswerVersion has IsDeleted
AnswerVersion has DeletedByUser
AnswerVersion has DeletedTimestamp
Comment has Text
Comment has User
Comment has Timestamp
Comment has IsDeleted (boolean)
(note - no versions on comments)
I think that's the basics. These assertions drive ERDs in ORM. Hopefully it's self-evident how they drive the User Stories as well.
I don't think an implementation of a normalized design like this would require denormalization, especially since it seems clear (from behavior) that the queries behind the UI displays are cached and refreshed once per minute.
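For what it's worth, here is one way the assertions above fall out as classes (a sketch in Java; field names mirror the roles, with persistence mapping and accessors omitted):

    import java.time.Instant;
    import java.util.List;

    class User {
        String name;
    }

    class Question {
        User user;                      // Question has User
        List<QuestionVersion> versions; // Question has QuestionVersions
        List<Answer> answers;           // Question has Answers
        List<Comment> comments;         // Question has Comments
    }

    class QuestionVersion {
        String text;
        Instant timestamp;
        boolean isDeleted;              // or inferred from a non-null deletedTimestamp
        User deletedByUser;
        Instant deletedTimestamp;
    }

    class Answer {
        User user;
        List<AnswerVersion> versions;
        List<Comment> comments;
    }

    class AnswerVersion {
        String text;
        Instant timestamp;
        boolean isDeleted;
        User deletedByUser;
        Instant deletedTimestamp;
    }

    class Comment {                     // note: no versions on comments
        String text;
        User user;
        Instant timestamp;
        boolean isDeleted;
    }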