I'm doing a presentation, much of which involves me coding live at the keyboard. For clarity, and to reduce the potential for errors, I was hoping it would be possible not to have to retype some complex sections, but instead have them 'typed' for me at a readable speed so that I can still talk over the code while it is being output.
Looking at IntelliJ's macros, they would work perfectly except that they run too fast for me to talk over. Are there any other tools you know of that could assist with this?
Thanks,
Ian.
Some people create ad-hoc live templates for their presentations that insert whole code snippets for them. It's not human typing speed, though.
You could try to create a small plugin with a single action that calls Thread.sleep(...) synchronously and invoke it in your macros to slow them down. It's a bit tedious, though.
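For reference, a minimal sketch of such an action might look like the following (the class name and delay are made up, and the action would still need the usual plugin.xml registration):

    import com.intellij.openapi.actionSystem.AnAction;
    import com.intellij.openapi.actionSystem.AnActionEvent;

    // Sketch of a "pause" action to slow down macro playback; invoke it
    // between the recorded steps of the macro.
    public class PausePlaybackAction extends AnAction {
        @Override
        public void actionPerformed(AnActionEvent event) {
            try {
                // Blocks synchronously, so the macro appears to "type" more slowly.
                Thread.sleep(500);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }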
You can also split your live templates or macros into small chunks and comment on each one after invoking it. This should add some continuity to the presentation and keep your listeners from losing focus.
I'm trying to get an idea of when it is worth developing classes for a program in VB.
Assuming that you have a single-form program that's not too big, is it bad practice to write the whole thing in that form's module in VB?
To give an idea of the program's size, say it includes two timer objects, subroutines, functions, and form control methods, all in this single module, that instantiate various objects including HTTP request calls. (The timers tick at roughly 5-second intervals, each making an HTTP request.)
I think there is nothing wrong with not using design patterns for a small program.
However, I always try to implement design patterns, even for small projects. The reason is that even small projects can be a chance to advance your skills. If I have a time limit, I cast this reasoning aside. So I guess it's up to you. It will definitely take you more time to develop a small program with design patterns, but as I mentioned, skipping them means discarding potential practice time (:
And who knows what your so-called small project will look like in a year. Maybe you will want to make it bigger. You can either plan in advance or refactor the whole thing when you realize it is time for it. I guess both ways are fine, but I definitely prefer a more thorough architecture from the beginning.
There is no real right or wrong answer. For small programs like yours, I would say it's probably not worth it to get too elaborate, so yes, I think it's OK to put all your code in the form class. For larger applications, you should definitely consider using one of the well-thought-out and proven design patterns such as MVP, MVC, Presentation Model, etc. There are a lot of resources out there on the web that cover these, but the fundamental idea is that you can gain a lot of benefits by separating your UI from your presentation logic code and your domain logic code.
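To make the separation concrete, here is a deliberately tiny, language-agnostic sketch of the MVP-style split described above (shown in Java purely for illustration; all names are invented):

    // Domain logic: knows nothing about the UI.
    class FeedService {
        String fetchLatest() {
            return "latest feed item";   // e.g. where the HTTP request call would live
        }
    }

    // The form (view) only renders what it is told.
    interface FeedView {
        void show(String text);
    }

    // Presentation logic: coordinates domain and view, with no UI-toolkit code.
    class FeedPresenter {
        private final FeedService service;
        private final FeedView view;

        FeedPresenter(FeedService service, FeedView view) {
            this.service = service;
            this.view = view;
        }

        void onTimerTick() {
            view.show(service.fetchLatest());
        }
    }

The point is only the shape of the dependencies: the form depends on the presenter, the presenter on the domain code, and never the other way around.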
Please let me know the difference between hand-written code and recorded scripts in automation testing tools like Coded UI or any other tools.
Regards,
Raj
By 'hand-written' I'll assume you mean manually coded...
I can see a few reasons. Coding experience is brilliant. It will be a worthwhile investment if you code your own tests, because you can learn a lot about the testing framework you are using (Coded UI, Selenium, etc.) as well as the language you are using (Java, C#). Manually coding these tests, using the framework's built-in methods, will serve you well and give you much more knowledge than an automatic playback tool would.
Automatic playback tools can produce horrific code: code that is ugly and badly named, follows no best practices, and relies on unreliable location methods.
Playback tools will simply use the simplest way to find an element. This is not always the best way. A classic example is XPath.
Most notably, XPath is a powerful tool; it can get you any element you need (or at least, I've never found a situation where XPath cannot be used), but playback tools will produce horrific XPath queries based purely on position... let's take an example.
You've got a page that has 100 feed items. You want to verify that after a particular action a feed item is shown on this page, and not only that it is shown but that it is the first one. You cannot use IDs etc., because the markup is badly made, so you must use XPath.
A playback tool might produce a very odd XPath like: //div[1]/span[2]/table[1]/tbody[1]/tr[10]/td[2]/a[text()='Test'].
Looks weird, right?
This will work a few times, but what happens if the app gets another tr element shoved at the top of the table? Now tr[10] won't be the element you want; the one you want will be at tr[11].
Through manual coding you can account for this: you can put in logic to work around it. Playback tools won't.
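As a rough illustration of that kind of work-around logic, here is what it could look like with Selenium WebDriver in Java; the 'Test' link text comes from the example above, while the class and method names are hypothetical:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    public class FeedChecks {
        // True if the row containing a link with the given text is the first row
        // of its table, regardless of how many rows the markup contains overall.
        static boolean isFirstFeedItem(WebDriver driver, String linkText) {
            String row = "//tr[.//a[text()='" + linkText + "']]";
            return !driver.findElements(By.xpath(row)).isEmpty()
                && driver.findElements(By.xpath(row + "/preceding-sibling::tr")).isEmpty();
        }
    }

Because the row is anchored on the link text rather than a positional index, an extra tr pushed onto the top of the table no longer breaks the locator.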
I highly recommend coding these tests yourself. You do not need a few years experience to do this, you do not need any prior programming degrees. You need time.
Playback tools will also be limited in what they can do. Want to take a screenshot when a test fails? I highly doubt a playback tool will do this; you'll need to put in the logic yourself. However, that isn't hard to do.
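For instance, with Selenium WebDriver in Java a screenshot-on-failure helper is only a few lines (the helper name and output path below are just placeholders):

    import org.openqa.selenium.OutputType;
    import org.openqa.selenium.TakesScreenshot;
    import org.openqa.selenium.WebDriver;

    import java.io.File;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public class ScreenshotOnFailure {
        // Call this from your test framework's failure hook or catch block.
        static void capture(WebDriver driver, String testName) throws IOException {
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            Path target = Path.of("screenshots", testName + ".png");
            Files.createDirectories(target.getParent());
            Files.copy(shot.toPath(), target, StandardCopyOption.REPLACE_EXISTING);
        }
    }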
There might be a business reason too - playback tools can convert manual tests into automated tests faster, but they won't be reliable. You'll need to dedicate time to making them reliable and fast - time that would be better spent coding them yourself in the first place.
I am working in Aptana Studio 3 and am wondering if there is a quick, easy way to minimise your code when you've finished developing.
Say you have a long SQL insert that you have separated out over several lines for ease of editing during development. At production time, you could go through and remove all your line breaks to compress the code onto fewer lines, but this could be a very time-consuming manual task.
Just wondering if there is an easier way?
I don't know whether Aptana has this feature, but I doubt doing this will give you any benefit.
Generally, putting code onto fewer lines doesn't give any benefit apart from when the code must be transferred over a network (e.g. to a web browser), in which case you save a few bytes of traffic.
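To make that concrete, here is a small sketch (in Java, with an invented table) showing that a statement split over several lines and its one-line 'minified' form are the same statement once the line breaks are gone; the database never sees the difference, only your source file does:

    public class SqlFormattingDemo {
        // Readable, multi-line version kept for development.
        static final String READABLE =
            "INSERT INTO orders (id, customer, total) " +
            "VALUES (?, ?, ?)";

        // "Minified" one-liner: the same statement as far as the database is
        // concerned; only a few bytes of whitespace differ in the source file.
        static final String COMPACT =
            "INSERT INTO orders (id, customer, total) VALUES (?, ?, ?)";

        public static void main(String[] args) {
            System.out.println(READABLE.equals(COMPACT));  // prints: true
        }
    }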
I have a busy set of routines to validate or download the current client application. It starts with a Windows desktop shortcut that invokes a .WSF file. This calls on several .VBS files, an .INI for settings, and potentially a .BAT file. Some of these script documents have internal functions. The final phase opens a Microsoft Access database, which entails an AutoExec macro, which kicks off some VBA, including a form which has a load routine of its own in VBA.
None of this detail is specifically important (so please don't add a VBA tag, OR criticize my precious complexity). The point is I have a variety of tools and containers and they may be functionally nested.
I need better techniques for parsing that in a flow chart. Currently I rely on any or all of the following:
a distinct color
a big box that encloses a routine
the classic 'transfer of control' symbol
perhaps an explanatory call-out
Shouldn't I increase my flow-charting vocabulary? Tutorials explain the square, the diamond, the circle, and just about nothing more. Surely flowcharting can help me deal with these sorts of things:
The plethora of script types lets me answer different needs, and I want to indicate tool/language.
A sub-routine could result in an abort of the overall task, or an error, and I want to show the handling of that by (or consequences for) higher-level "enclosing" routines.
I want to distinguish "internal" sub-routines from ones in a different script file.
Concurrent script processing could become critical, so I want to note that.
The .INI file lets me provide all routines with persistent values. How is that charted?
A function may have arguments and a return value/reference ... I don't know how to effectively denote even that.
Please provide guidance or point me to an extra-helpful resource. If you recommend an analysis tool set (like UML, which I haven't gotten the hang of yet), please also tell me where I can find a good introduction.
I am not interested in software. Please consider this a white board exercise.
Discussion of the question suggests flowcharts are not useful or accurate.
Accuracy depends on how the flow charts are constructed. If they are constructed manually, they are like any other manually built document and will be out of date almost instantly; that makes hand-constructed flowcharts really useless, which is why people tend to like looking at the code.
[The rest of this response violates the OP's requirement of "not interested in software (to produce flowcharts)", because I think that's the only way to get them in some kind of useful form.]
If the flowcharts are derived from the code by an appropriate language-accurate analysis tool, they will be accurate. See examples at http://www.semanticdesigns.com/Products/DMS/FlowAnalysis.html These examples are semantically precise, although the pages there don't provide the exact semantics; that's just a documentation detail.
It is hard to find such tools :-} especially if you want flowcharts that span multiple languages, and multiple "execution paradigms" (OP wants his INI files included; they are some kind of implied assignment statements, and I'm pretty sure he'd want to model SQL actions which don't flowchart usefully because they tend to be pure computation over tables).
It is also unclear that such flowcharts are useful. The examples at the page I provided should be semi-convincing; if you take into account all the microscopic details (e.g., the possibility of an ABORT control-flow arc emanating from every subroutine call, because each call may throw an exception), these diagrams get horrendously big, fast. The fact that the diagrams are space-consuming (boxes, diamonds, lines, lots of whitespace) aggravates this pretty badly. Once they get big, you literally get lost in space following the arcs. Again, a good reason for people to avoid flowcharts for entire systems. (The other reason people like text languages is that they can in fact be pretty dense; you can get a lot on a page with a succinct language, and wait'll you see APL :)
They might be of marginal help in individual functions, if the function has complex logic.
I think it unlikely that you are going to get language-accurate analyzers that produce flowcharts for all the languages you want, or that such analyzers can compose their flowcharts nicely (you want JavaScript invoking C# running SQL ...?).
What you might hope for is a compromise solution: display the code with various hyperlinks to the other artifacts referenced. You still need the ability to produce such hyperlinked code (see http://www.semanticdesigns.com/Products/Formatters/JavaBrowser.html for one way this might work), but you also need hyperlinks across the language boundaries.
I know of no tools that presently do that. And I doubt you have the interest or willpower to build such tools on your own.
Wizards can kick-start features. They can also obfuscate your code, and are anti-YAGNI.
On balance, do you think Wizards are more useful or more harmful?
They are more useful than harmful if and only if you understand the code they generate.
They are only really useful once you have mastered the problem the wizard is trying to solve.
Otherwise you'll hit walls later in the project, because the generated code will need modifications at some point.
"The Law of Leaky Abstractions" really nails it on the head.
They're there for a reason - to try and make your life easier.
They can be useful and save you 5 or 10 minutes of typing. Of course it's best to read and make sure you understand what they've written for you.
If you use them without understanding, then they could be considered harmful in the fact that they're letting you get away with not learning something you should probably know, but on balance I think they're a good thing.
Wizards are good if and only if you can get away with never editing the code that they generate. In that situation, they are in essence a very high level programming language. When you change your mind about something that was generated by the wizard, you can run the wizard again.
Wizards are most horribly evil if you must ever edit the code that they generate. If you do this, and later change your mind about one of the choices that you made in the wizard, then you are forced to choose between two very bad options. You can rerun the wizard, and reapply the manual edits, or you can try to edit the multiple copies of the boilerplate code that the wizard created the first time around. In the former case, you are likely to forget at least one of your edits, and in the latter case, you are likely to miss at least one of the places in the code that was affected by your choice at wizard running time.
Wizards are "mostly harmless" when they generate an encapsulated entity - a function, a class or a set of classes - which you don't need to modify and which you interact with through a well-defined, well-designed interface.
On the other end of the spectrum is a wizard that generates skeleton code that needs to be extended and modified. This is especially troublesome if you can't change some of the wizard options later without losing your edits.
These are still "ok" for the pro who could write the same code by himself and uses the wizard to save time. However, when they are used to make something complicated look easy for beginners, they are a paint job over a rusty car: they help sell something that you otherwise wouldn't buy.
In practice, they may still be useful to ease adoption of a platform. But that's a business aspect, and whether business aspects may justify code blunders is a question of the development environment.