I am working with Multivariate Experiments in Ektron 8.6.
After setting up an experiment, I could see proper results in the experiment widget (both HITs and Conversions). But the issue is with the multivariate_report table: neither HITs nor Conversions are being updated there. The problem with the data not being written to the DB is that you lose the results if you restart IIS (the session is lost). While digging deeper into this, I found that the GUID in the table data does not match the variant GUID. It seems to be a BUG in Ektron. I am using Version 8.60 SP1 (Build 8.6.0.060).
Has anyone else had similar issues with multivariate tests?
Our organization has used Pentaho for data integration for over a decade. Due to organizational changes earlier this year, our team is taking over responsibility for data integration going forward.
The problem is, our team has nearly zero experience with Pentaho and uses Frends for data integration. We want to move everything out of Pentaho and into Frends. But alas, Pentaho is proving to be a fairly challenging customer.
Problem 1: There's a lot, and I mean A LOT, of stuff in Pentaho. The depth of some jobs is quite staggering, at least by our standards.
Problem 2: There is zero external documentation.
Problem 3: There is almost zero documentation within the jobs.
Problem 4: Pentaho is slow. Switching between open tabs easily takes 10 seconds, opening a new tab 20 seconds, and closing a tab 10 seconds.
Problem 5: A job within a job may be named ABC, but when you open ABC, it is actually named DEF. Combine this with Problem 4, and keeping track of where you are and where you came from is hard.
At first I tried to document the relevant jobs in our wiki using a tabular structure, but it quickly became a mess that was nearly impossible to navigate. Considering the outcome, spending any more time manually documenting Pentaho seems pointless.
Is there a way to generate (hopefully readable) documentation automatically? I googled for answers and found this page, but given our zero experience with Pentaho, I don't know if that page describes what I'm looking for, and the instructions are written in such broad strokes that I have no idea how to apply them in practice.
Thank you in advance.
Re your Problem 4: there's no reason for Pentaho to take that long opening or switching tabs. Unless you're running it on particularly slow VMs with little memory, it shouldn't take anywhere near that long. It's not the quickest GUI out there, but it should be fast enough to work with.
The only documentation tool I know of is the one you mention in your post. As far as I know it's quite old, and I don't know how well it will work with your version. Give it a try and see how it goes.
Other than that, if there's no documentation available, you have little recourse other than going through the various jobs, sub-jobs, transformations, and sub-transformations and reverse-engineering them.
I have a challenge from a customer. They'd like me to be able to support the creation, editing and deletion of custom fields against a Model within my application. This is so they can record any values that they see fit against a model and report against them.
I've tried a few methods of accomplishing this, but none allowed for easy integration into our existing reporting engine, and I'd quite like to keep that as untouched as possible.
I've scoured the web for answers to similar questions; most were from 2015-2018, most also asked for dynamic models to be supported, and the answer was frequently "No, not yet, but we will support this in the future".
To support dynamic models, I have seen solutions that reload Waterline via sails.hooks.orm.reload() or even programmatically restart the Sails app altogether, initialising again with the new model present. This causes a few seconds of downtime as the server restarts (for us it is about 30 seconds, as we have a huge app with nearly 50 models and 350 routes). We can't really go with this approach: that downtime is too much of an overhead, and our clients would be a bit miffed.
Is supporting just the dynamic handling of attributes possible without all the bootstrapping that takes place on sails lift, or is it just not feasible? My initial feeling from all my reading is essentially no, as the processes that run during startup hook up and sync Waterline, your DB, and your files.
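To illustrate the attribute-only direction I've been sketching: custom field values could live as rows in a fixed, EAV-style model, so adding a field never touches the ORM (model and attribute names here are hypothetical):

// api/models/CustomFieldValue.js — hypothetical sketch: each custom
// field value is a row in this static model, so no ORM reload is needed.
module.exports = {
  attributes: {
    targetModel: { type: 'string', required: true }, // e.g. 'order'
    targetId: { type: 'string', required: true },    // id of the target record
    name: { type: 'string', required: true },        // custom field name
    value: { type: 'json' }                          // the stored value
  }
};

Reporting would then have to join against this table instead of real columns, which is exactly the part I'm not sure our reporting engine would tolerate.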
Any discussion or examples of similar work welcomed.
We've recently started to support a PowerBuilder 10.5 application, and the question has come up whether we should think about alternatives or keep the app running on PB 10.5. It is a classic PB app: administrative software built on an Oracle DB.
Right now the app works great, but there are two reasons why we are reconsidering:
The sole developer of this app is about to retire. He's the only one who has all the PB knowledge to support this app.
We might want to improve the services the app offers, so integrations with other tools are right around the corner.
I'm not very familiar with PB, but I've read that it (only the newest versions) is now supported by Appeon. The latest version is 2017 R3, with a 2019 version coming up.
I'm wondering what the pros & cons are of trying to update the current 10.5 version to the latest version. Is it worth it to upgrade? Or what are the pros & cons of sticking with 10.5?
Or should we consider moving to a newer technology, since so few PowerBuilder developers are to be found nowadays? And if so, what technology would you advise?
Rather than just the differences between the older and newer PB versions, I'm looking for motivations to upgrade, migrate, or do nothing at all.
Thanks.
So, there's no clear-cut answer, but we can throw around some ideas on the non-technical bullet points (as requested).
Staying on 10.5: There's a lot to be said for "if it ain't broke, don't fix it." If it works and you're happy with what it does, don't move it.
However, since you've said that you're planning on moving it forward, you might want to consider that 10.5 doesn't support current operating systems (within a year, the only Windows versions still supported by MS will be Win8 and Win10), which were nothing but figments of imagination when 10.5 came out. Your 10.5 app may work on Win10 now, but that's solely because of MS's work on backward compatibility, and because you haven't leveraged an area of PB that had a problem in a then-future version of Windows. If you need to add functionality, being on a version that at least suggests that it works on your operating system could be helpful.
A parallel argument applies to databases, with one exception: if your app uses SQL Anywhere, the database that used to come for free in several PB packages, it is now something you'd have to purchase separately.
One thing to remember about trying to move forward with an old version of anything is support. If you get stuck, the vendor will basically not talk to you, and the peer community has been shrinking, so you've got less chance of getting into a dialog with fellow developers.
Upgrading: Upgrading is usually a minor effort. The most frequent exceptions I've seen involve deprecated functionality, and code that depends on behaviours that didn't stay consistent between versions (some behaviours are promised to stay consistent, but not all). Run a migration test with a trial version, with your PB expert, to get that question off the table.
One thing to keep in mind when upgrading is that the licensing model has changed. PB used to have a perpetual model (buy once, use forever), but it's now a subscription model. Whether this is an improvement for you or not is up to you to figure out.
Whether it is "worth it" to upgrade, in my mind it usually boils down to
OS support
DB support
vendor support
peer support
deprecated features, and whether I use them
new features, and whether I would use them (and you asked us not to discuss these last two items, which need to be weighed very individually anyway, and are well documented on Appeon's site)
"Migrating": I've put "migrating" in quotes, because I don't believe there's a technology that lets you "migrate" in the sense of a code translation. (I'll let you read one of my old tirades about wanting to "migrate" off PB.) What I'll talk about here is rewriting in a new technology. Both pulling business rules out of an old PB system and redesigning/rewriting in another technology is a big effort.
The biggest argument in favour these days is getting and keeping PowerBuilder talent. Finding people with PB under their belt is hard, and judging legitimate talent is challenging, even with someone who knows PB on your side of the interview table. (Leverage your retiring guy if you want to move forward with PB.) Training someone in PB is no small task either. Someone once asked me, and I'm not an educator, if I could come up with a course and train his team in a week. I laughed. After a two-week course designed and given by professional educators from the then-vendor Powersoft, I came home and wrote incredibly embarrassing code. I also needed lots of time practicing and getting feedback from my peers. And even if you can hire or train someone, if they are only doing PB work a couple of weeks per year, those PB "muscles" will atrophy. No matter the technical arguments of PB vs something else, if you can't get PB talent to maintain it, PB is a dead end.
I'm afraid I'm not one to suggest an alternative technology. It used to be that, in terms of rich client apps, you couldn't go wrong choosing Microsoft, but since then, MS has sent the development community on some wild goose chases that have ended in deprecated technologies. I wouldn't want to be the guy gazing into the future to guess.
Good luck.
I would recommend migrating.
You will find several companies that offer migration to both Java and .NET, which are the leading platforms.
In terms of UI, for me the only option currently is the web. Using other technologies does not make a lot of sense.
If your company uses a lot of the MS stack (MS OS, SQL Server, Exchange, SharePoint, etc.), I would recommend migrating to C#; otherwise, migrating to Java makes more sense.
Terry's answer is quite good, but the point about migration was not addressed with respect to the new features in PowerBuilder 2019.
One major feature of PowerBuilder 2019 is a C# DataStore (compatible with .NET Core) together with a DataWindow object migration utility. The C# DataStore has the same APIs and transaction mechanism as the PowerScript DataStore. It is documented in detail on the Appeon website: https://www.appeon.com/support/documents/appeon_online_help/powerbuilder/api_reference/PowerBuilder.Data/DataStore/IDataStore/IDataStore.html
Should you decide C# is the way to go, this feature of PowerBuilder 2019 makes the migration effort a "port" of the PowerScript non-visual code rather than a rewrite (for the reasons mentioned above).
Here is example PowerScript code:
public function datastore of_retrieve (date ad_start, date ad_end, decimal adec_amt);
// Build a DataStore on the d_order_customer DataWindow object,
// hook it to the default transaction object, and retrieve.
Datastore lds
lds = Create Datastore
lds.dataobject = "d_order_customer"
lds.SetTransObject(SQLCA)
lds.Retrieve(ad_start, ad_end, adec_amt)
Return lds
end function
Here is the same example in C# using the C# DataStore:
public IDataStore GetOrderCustomerInfo(DateTime startDate, DateTime endDate, decimal amount)
{
    // _context plays the role SQLCA plays in PowerScript: it supplies
    // the connection/transaction to the DataStore.
    IDataStore dataStore = new DataStore("d_order_customer", _context);
    dataStore.Retrieve(startDate, endDate, amount);
    return dataStore;
}
I'm looking into Meteor to create a collaborative document editing app, because it's great that Meteor synchronizes data between multiple clients by default.
But when using a textarea, as in Sameer Kalburgi's example, the experience is sub-optimal:
http://www.skalb.com/2012/04/16/creating-a-document-sharing-site-with-meteor-js/
http://docshare-tutorial.meteor.com/
I tried typing at the same time as a colleague: my changes would be overwritten when she typed, and vice versa. So there's no merge algorithm in the conflict resolution yet, I think?
Is this planned for the future? Are there ways to implement this currently? Etherpad seems to handle this problem rather well. Having this in Meteor would make creating collaborative document editing apps far more accessible.
So I looked into it some more; the algorithm used in Etherpad is known as Operational Transformation:
The solution is Operational Transformation (OT). If you haven’t heard of it, OT is a class of algorithms that do multi-site realtime concurrency. OT is like realtime git. It works with any amount of lag (from zero to an extended holiday). It lets users make live, concurrent edits with low bandwidth. OT gives you eventual consistency between multiple users without retries, without errors and without any data being overwritten.
Unfortunately, implementing OT sucks. There's a million algorithms with different tradeoffs, mostly trapped in academic papers. The algorithms are really hard and time consuming to implement correctly. We need some good libraries, so any project can just plug in OT if they need it.
That's from the site of ShareJS, a Node.js-based OT server-client that you can hook into your existing client.
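To make the transform idea concrete, here's a toy JavaScript sketch of my own (not ShareJS code) that handles only concurrent inserts; real OT also has to deal with deletes, tie-breaking, and multi-op histories:

// Each op is {pos, str}. Transforming an op against a concurrent op
// shifts its position so both clients converge on the same text.
function transformInsert(op, against) {
  if (against.pos <= op.pos) {
    return { pos: op.pos + against.str.length, str: op.str };
  }
  return op;
}

function apply(doc, op) {
  return doc.slice(0, op.pos) + op.str + doc.slice(op.pos);
}

// Two users edit "hello" concurrently:
var a = { pos: 0, str: "Oh, " };   // user A prepends
var b = { pos: 5, str: " world" }; // user B appends

// Each site applies its own op first, then the other op transformed:
apply(apply("hello", a), transformInsert(b, a)); // "Oh, hello world"
apply(apply("hello", b), transformInsert(a, b)); // "Oh, hello world"

Both orderings converge on the same document, which is the whole point: no edit is lost, no matter who typed first.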
OT is also implemented in the Racer model synchronization engine for Node.js, which forms the underpinnings of Derby. At the moment derby.js doesn't provide it transparently yet, but they plan to. From the Derby docs:
Currently, Racer defaults to applying all transactions in the order received, i.e. last-writer-wins. (…) Racer [also] supports conflict resolution via a combination of Software Transactional Memory (STM), Operational Transformation (OT), and Diff-match-patch techniques.
These features are not fully implemented yet, but the Racer demos show preliminary examples of STM and OT.
Coincidentally, both the ShareJS and DerbyJS teams have an ex-Google Waver on board, and Meteor has an ex-Etherpad/Google Waver on their core team. Since Etherpad is one of the best-known implementations of OT, I was imagining Meteor would surely want to support it at some point as well…
I've created a Meteor smart package that integrates ShareJS:
https://github.com/mizzao/meteor-sharejs
It's quite preliminary right now, but you can import it into your app, drop in textareas, and it "just works". Please try it out and submit some new features via pull requests :)
Check out a demo here:
http://documents.meteor.com
What you describe seems out of Meteor's scope to me. It's not a tool for setting up collaboration capabilities!
What it provides is a way to transparently work against a subset of a server's database. But implementing use-case-specific merging is the job of the application, not the framework.
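For instance, an application could do its own optimistic versioning in a Meteor method instead of letting the last write win. This is just an illustration of "merging is the app's job"; the collection and method names are hypothetical:

// Documents is assumed to be an existing Mongo.Collection.
// Clients send edits tagged with the version they last saw; the method
// rejects stale writes so the app can merge instead of overwriting.
Meteor.methods({
  applyEdit: function (docId, baseVersion, newText) {
    var doc = Documents.findOne(docId);
    if (!doc || doc.version !== baseVersion) {
      // Concurrent edit detected: app-specific merging (diff/patch,
      // OT, prompting the user, ...) would happen here.
      throw new Meteor.Error("conflict", "Document changed underneath you");
    }
    Documents.update(docId, { $set: { text: newText }, $inc: { version: 1 } });
  }
});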
Our testing system is pretty rudimentary: fire up a browser, see if it works. Recently our client found problems with our application where the number of users created a slow-down. The application is basically a huge Word document with people editing their own versions all at the same time. Part of the problem came from not knowing how to test multiple instances at the same time. My partner and I thought about how to test this; one idea was to rent an internet cafe and hire students for an hour to bang on the app.
What are other ways that people have tried to emulate concurrency in testing their web-based application? Most of the advice here is for specific methodology; I'm asking, how do you test it to make sure that it works?
If you have never checked out Selenium, you need to. It will let you do automated web testing through the browser. OK, so that's the first problem solved.
Now, ideally you could take that same script, load it onto a bunch of boxes, and run them all at once to get some sort of load testing, right? Luckily for you, someone has already figured this out, although it is a paid service: Browser Mob. It looks like you were willing to spend a little money to do this anyway, and it would probably net you better, more repeatable results.
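As a rough illustration of the kind of script you'd fan out, here's a sketch using the selenium-webdriver Node bindings; the URL and element id are placeholders for your app's:

var webdriver = require("selenium-webdriver"); // npm: selenium-webdriver
var By = webdriver.By, until = webdriver.until;

// One simulated user: open the app, find the editor, type something.
function editDocument(userId) {
  var driver = new webdriver.Builder().forBrowser("firefox").build();
  return driver.get("http://your-app.example.com/document/42") // placeholder URL
    .then(function () {
      return driver.wait(until.elementLocated(By.id("editor")), 10000); // placeholder id
    })
    .then(function (editor) {
      return editor.sendKeys("edit from user " + userId + "\n");
    })
    .then(function () { return driver.quit(); });
}

// Fire several simulated users at once from a single machine.
Promise.all([1, 2, 3, 4, 5].map(editDocument));

One browser per simulated user gets heavy fast, which is exactly why you'd spread this across boxes or pay a service to do it.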
We usually answer the question "can the web application do more than one thing at a time" by using JMeter to produce a simulated HTTP load on the web server.
I find that it helps to distinguish several different types of testing: concurrency (what happens when two events in the system collide), capacity (what happens when there are many overlapping requests), and volume (what happens as data accumulates in the system).
A huge general slow-down, evidenced by response times that fall outside the SLA, is usually related to capacity problems (with contention as a common cause) or volume (many users, much data, and the system gets slower over time). The former usually requires some sort of multi-threaded request stream; the latter you can usually manage by preloading the volume and then measuring the response times experienced by a single user.
I generally find that separating the load generator from the actual measurement/instrumentation is a good idea. That can be as simple as having a black box over there generating a typical load while you sit here with a stopwatch measuring the responsiveness of a typical use case.
JMeter http://jmeter.apache.org/