I am finishing a Cocoa app which will use CocoaFob for licensing, and I am wondering about the "most" efficient and secure way to implement a trial period in Cocoa.
For security you need to make sure the check is not in an easy-to-spot method, as such a method can be switched out at runtime. Ideally it should be checked in multiple places, where disabling or modifying the check would disable important chunks of the application (e.g. loading initial data).
Having said that, how much do you want to risk inconveniencing a genuine user? And how much time can you justify spending on something that doesn't give anyone a reason to buy your application?
You also have to make sure that keys don't get redistributed, and realistically, if someone is determined enough, they will pirate your application one way or another. Spend just enough time to keep honest people honest.
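For illustration, here's a minimal sketch in Swift of tying the check to work the app needs anyway, so patching out one obvious method isn't enough. All the names here are my own inventions for the example, not CocoaFob API:

    import Foundation

    enum LicenseError: Error { case invalid }

    // Stand-in validation: derive a value the app genuinely needs from the
    // license, rather than returning a bare Bool that is trivial to patch out.
    func sessionToken(forLicense key: String) throws -> String {
        guard key.count >= 8 else { throw LicenseError.invalid } // placeholder check
        return String(key.reversed())                            // placeholder derivation
    }

    // The initial data load refuses to proceed without the token, so disabling
    // the "license method" also disables loading the application's data.
    func loadInitialData(licenseKey: String) throws -> [String] {
        let token = try sessionToken(forLicense: licenseKey)
        // ... use `token` to unlock or decode the real data ...
        return ["unlocked record via \(token)"]
    }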
Also bear in mind that a trial version won't be accepted into the Mac App Store, and neither will a version with license key management, so you will either be cutting yourself out of that market or distributing a version without license keys, which may get cracked anyway.
Hopefully this helps, and I would be interested in reading what solution you decide to go with.
If you're going to implement a time-based demo, consider using one based on processor time rather than an absolute date. The idea being, the user can use your application fully for, say, 4 hours of CPU time. That way they are not locked into a "must decide by" date. I have often downloaded something to look at, then later tried to really use it, only to find the trial date had expired.
It's not that hard to implement, and I'm sure users would appreciate it more.
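A minimal sketch of such a usage-time trial in Swift, assuming a 4-hour allowance and a UserDefaults key of my own invention. It tracks active session time rather than strict CPU time, and a real app would obfuscate or sign the stored value, since defaults are trivial to edit:

    import Foundation

    let trialAllowance: TimeInterval = 4 * 60 * 60   // assumed 4-hour trial
    let usageKey = "accumulatedTrialSeconds"         // assumed defaults key

    final class TrialClock {
        private var sessionStart = Date()

        // Total usage: previously persisted time plus the current session.
        var secondsUsed: TimeInterval {
            UserDefaults.standard.double(forKey: usageKey)
                + Date().timeIntervalSince(sessionStart)
        }

        var isExpired: Bool { secondsUsed >= trialAllowance }

        // Call periodically (e.g. from a Timer) and on quit to persist usage.
        func checkpoint() {
            UserDefaults.standard.set(secondsUsed, forKey: usageKey)
            sessionStart = Date()
        }
    }

When isExpired flips to true, you can drop the app into whatever limited or nagging mode you choose.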
I am currently working on porting a benchmark application to another system. I am working alone, and I am unsure which software methodology I should use. Please give me some ideas.
I am going to assume you're wondering which Agile approach to use on your project as you tagged your question accordingly.
Agile is mainly about:
Delivering working software continuously and regularly
Aiming at technical excellence and avoiding technical debt
Improving the way we work and retrospecting regularly
I'd say whatever you use, even your very own approach to software development, if you can check those three items from the list, then you're pretty much Agile to me. Some people need strict guidelines and artifacts and that's fine, they help people become Agile but are far from being mandatory despite the dogmas out there.
Here's how I would approach your situation.
Take a step back and try to identify the most important features or abilities of this benchmarking application. By most important, I mean the features that the people using it cannot live without. Once you have a list of those, put them on post-it notes, index cards, Trello, Jira or whatever tool you want to use.
Split each of those features into full-stack chunks of functionality that are business driven. I'm not talking about technical tasks here, but smaller features usable by actual people. I usually opt for the "Grandma Driven" approach here, asking myself "would grandma be able to understand what I'm trying to do?". It's just to make sure I'm focusing on a full stack feature and not a technical task like "populate database". One way to see this is also by applying dimensional planning to each of the features you identified (http://www.xpday.net/Xpday2007/session/DimensionalPlanning.html).
Set yourself an iteration length (I usually go for one week, or at most two, when I'm working alone) and work on one small item at a time. Don't write code for later; write only what you need to solve the problem at hand. Quality is not optional. Focus on good coding and testing practices.
At the end of your iteration, check how many features you implemented and put that number somewhere on a chart, in a google spreadsheet or whatever. This will help you see if you're on track. Get feedback from colleagues or any potential users of the system and reflect on that feedback. It's not because you're porting to another platform that you can't make it better.
If what's left is not granular enough, or your list of things to do runs low, spend some time repeating steps 1 to 3.
At the end of each iteration, keep tracking how many items you did just to see if you still have a good enough pace. If not, ask yourself why and change something in the way you work or get help. Again, your main focus is to make progress and deliver software that works at the end of each iteration.
This might not answer your question, and I know I didn't give you an answer of the type "use Kanban, Scrum or whatever", but I truly believe naming a methodology is not appropriate in your specific case and would only generate overhead and boredom for you.
Hope that helps anyway, good luck with your project.
I wanted to ask you experienced programmers a question that has bothered me recently. I'm a second-year student at a university of technology, where we spend a lot of time learning how to code. I've found creating small but practical applications to be the best way to learn, and sometimes I would like to give them away for free. And here the problem appears: if someone wants to use an app but is afraid it is not safe, I don't know any way to prove it's not harmful other than showing the source. That's not a big deal for me, since these apps are not that big or complex, but I'm wondering if there is any way to show that a program is fully safe without sending the source code. It's basic stuff I guess; sorry if it sounds stupid and obvious.
Edit:
By "safe" I mean it's not a keylogger or anything like that.
Even if you strive hard to keep your app safe, if the underlying OS is vulnerable, your effort is in vain. So if you expect that level of trust, you probably have to restrict your app to platforms that you believe to be trustworthy.
Regarding the keyloggers you mention: show a virtual keyboard within your own app rather than using the system default. Encrypt all data you send from your app. Create a checksum for your app, and if someone tampers with it, make sure your app recognises this and becomes unusable until reinstalled. Have a pre-installer validate the platform your app is being installed on.
Never allow external sources to access the app's content. Secure your critical content in an encrypted container.
Maybe the link below provides some more insight:
http://www1.good.com/good-dynamics-platform/
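As a rough illustration of the checksum idea above, here is a Swift sketch assuming CryptoKit (macOS 10.15+) and a SHA-256 digest recorded at build time (the constant below is a placeholder). Note that a patched binary can also patch this check out, so it only raises the bar:

    import CryptoKit
    import Foundation

    // Placeholder: in practice, embed the real digest at build time.
    let expectedDigestHex = "digest-recorded-at-build-time"

    // Hash the app's own executable and compare against the recorded value.
    func binaryLooksUntampered() -> Bool {
        guard let path = Bundle.main.executablePath,
              let binary = FileManager.default.contents(atPath: path) else {
            return false
        }
        let hex = SHA256.hash(data: binary)
            .map { String(format: "%02x", $0) }
            .joined()
        return hex == expectedDigestHex
    }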
Basically, this is the same question as "how do I know anything is safe". Consumer appliances get recalled periodically, but we trust that they aren't deliberately designed to catch fire. If you aren't sure you trust it, you run it only when/where you can keep an eye on it and/or isolate it so it can't damage more than itself, or you throw it out.
If people don't trust the source of their code, they have two choices: Don't run it, or run it in a highly isolated environment.
The latter is a large part of what the Java applet and Java security environments are about, but of course that requires you to trust that whoever wrote the browser and/or set up the security environment did so successfully, and you have to trust that those don't have exploitable bugs.
If you're talking about products... There have been various practices published from time to time regarding how code should be written, and tested, for robustness. These days those may include "white hat" security attacks along with full code inspection and so on. If you can show that you're following these practices, it may reassure some folks who otherwise wouldn't trust you... but doing them with full rigor can be expensive, so part of this is knowing what your customers expect and/or will tolerate.
In the end, the real answer is that you need to start by writing trustworthy code, then know what the customer's concerns are and make sure you can meet their requirements either by delivering perfect code and/or by delivering above-average service and/or by being... no, I'm not going to take a cheap shot at that company this time.
I work at a software company that maintains some products.
We use a "bugtracker" to manage all tasks related to the products in question.
We work with Scrum, and the company's routine is basically the following:
The customer contacts support and requests that a problem be solved or a feature implemented.
The owners of the product group the tasks in order of priority and direct them to a sprint.
Developers finalize the task and ultimately are required to fill out a kind of "changelog".
The testers ensure that the developers' work was done correctly and close the call.
Here is my problem:
Developers do not like to fill out the "changelog", and usually forget to do it.
Here is my question:
Who should complete the "changelog"? The developers, or the testers?
This "changelog" is sent to end customers at the end of each Sprint, and basically serve to explain in nontechnical what has been resolved or implemented in software.
So then, who should do it? Developers, or testers?
This isn't a Scrum question; it reads like a process question. Can I also state that Scrum doesn't really lend itself well to maintenance work, and that you may be better off trying Kanban in that situation?
That said, although Scrum does not include any reference to changelog artifacts, I'd say that it's a team responsibility to ensure that the changelog is updated (as opposed to any one developer or tester). As a reminder, the team may want to consider adding this requirement to their "Definition of Done".
Hope that helps.
Find out what works best for your team. It seems odd that developers/testers would be communicating with customers directly. I would expect that to be the role of the support team who was originally in contact with the customer.
As you said in your comment, they are likely dragging their feet because it's not what they are good at and thus they don't like doing it.
A couple things to try:
Put everybody in a room and talk it out (doesn't work if there are too many people - maybe just get the dept heads). We need this to get done, it isn't getting done, why not and who has ideas how to fix this?
I'm not sure why the customers even need a description of what was changed - I'm picturing a "how we fixed it" situation. Who cares how, just that it is fixed. I'm saying to re-examine if this is necessary - perhaps there is an easier suitable substitute.
Try to automate it. If the customer does need a hand holding explanation of how it was fixed and all they really need to know is that it was fixed, perhaps you could automate your bug tracking tool such that the customer who reported the issue is notified when that ticket is closed - or rather, when it is deployed and visibly fixed for the customer.
Biggest piece of advice is to not make this a blame game situation. Your coworkers aren't unreasonable people - if they are resistant, then perhaps the process is too heavy. Be open to alternative solutions.
FYI - This kind of question may do better over at pm.stackexchange.com
I'm starting project number 8,192. Like most of my projects, it will probably either be thrown away or get canceled out of boredom, lack of time or lack of usefulness.
But there is a project that has been on the back-burner for a long time that I really want to finish. In my perfect-world mind, it should take 3 months to reach a first release.
Anyway, one of my biggest issues is taking a large project (or even a small-to-medium one) and breaking it down into manageable pieces. My mistake is always to jump right into the terminal, open TextMate and start coding. This almost always fails. I get lost in feature creep, learning newer methods, framework wars, etc. Then two months have gone by with nothing to show for it.
So I was wondering if BDD (such as Cucumber) might be a solution to this. Could it be used to scope out the larger pieces, then the smaller pieces, until I have a feature list covering most of the project? At that point, I just start coding the pieces, right?
What are your suggestions for tackling this problem, which I'm sure other developers share?
BTW, I'm using Rails 3 (sometimes Padrino).
On which track? BDD doesn't define the track--it communicates the track.
BDD may be the only requirements you have (or need), but that doesn't address the issue of feeping creaturisms unless you have the discipline not to implement anything for which no spec exists.
Uncaptured features don't get implemented, period. If a feature is added, it gets a scope, and is prioritized with the rest of the features. It may usurp something less-desirable, it may not.
The product owner (you in this case) must decide how much can be implemented in the time allotted, and which features should be implemented. It still boils down to discipline; you just have a tool that helps make sure what you implement is what you actually wanted.
It doesn't, however, make sure that what you get is only what you originally wanted--it won't make sure nothing else is implemented on top of the specs you bothered to implement.
We had a project handed over from the onshore team to our team (offshore) not long ago. However, we had difficulties with the hand-over process.
We couldn't think of any questions to ask during their design walk-through, because we were overwhelmed by the sheer amount of information. We wanted to ask, but we didn't know what to ask. Since they got no questions from us, management thinks the hand-over was done successfully.
We tried to go through all the documentation on our company wiki before attending the handover presentation, but there are too many documents and we didn't even know where to start.
I wonder, are there any rules or best practices we can follow to ensure a successful project hand-over, either from us or to us?
In terms of reading the documentation, personally I'd go for this order:
Get a short overview of the basic function of the application - what it is meant to achieve. The business case is probably the best document which will already exist.
Then the functional specification. At this point you're not trying to understand any sort of how or technology, just what the app is meant to do. If it's massive, ask them what the key business processes are and focus on those.
Then the high level technical overview. This should include an architecture diagram, required platforms, versions, config and so on. List any questions you have.
Then skim any other useful-looking technical documents - certainly a FAQ if there is one; test scripts can be good too, as they outline detailed "how to" scenarios. Maybe it's just me, but I find reading technical documents before I've seen the system a waste - it's too academic, and they're normally shockingly written. It's certainly an area where I'd limit the time I spent if I didn't feel I was getting a reasonable return for it.
If there are several of you, arrange structured reviews between you and discuss the documents you've read, making sure you've got what you need out of them. If the system is big, each take an area and present it to the others - give yourselves a reason to learn as much as possible; knowing you're going to be quizzed is a good motivator. Make a list of questions where you don't understand something. Having structured reviews between you will focus your minds and make it more of an interactive task, rather than just trawling through page after page of tedious documents.
Once you get face to face with them:
Start with a full system demo. Ask questions as they come up, don't let them fob you off with unclear answers - if they can't answer something have it written down and task them with getting the answer.
Now get the code checked out and running on your machines. Do this on at least two machines - one they lead, one you lead. Document the whole process - this is the most important step. If you can't get the code running you're screwed.
Go through the build process. Ensure that you can build the app (including any automated build and unit tests they may have). Note that all unit tests should pass - if they don't or if they say "oh, that one always fails" then they need to fix that before final acceptance.
Go through the install process. Do this at least twice, once they lead, once you lead. Make sure that it's documented.
Now come up with a set of common business functions carried out with the application. Use this to walk the code with them. The code base will be too big to cover the whole thing but make sure you cover a representative sample.
If there is a database or an API do a similar exercise. Come up with some standard data you might need to extract or some basic tasks you might need to carry out using the API and spend some time working through these with them.
Ask them if there's anything they think you should know.
Make sure that any questions you've written down anywhere else are answered.
You may consider it worth going through the bug list (open and closed) - start with the high-priority ones and talk through anything that looks particularly worrying. Even if they've fixed it, it may point at a bit of code which is troublesome.
And finally if the opportunity exists - if there are any outstanding bugs or changes, see if you can pair program a couple.
Do not finally accept the app unless you are 100% sure you can:
Get the code to compile
Get the code to build (including the database)
Get the application installed
Do not accept that the handover is complete until they have:
Documented anything you picked up on that wasn't covered to your satisfaction
Answered ALL of your questions - a question they won't answer after being asked repeatedly screams of something they're hiding
And grab their e-mail addresses and phone numbers. Even if it's only informal they'll probably be willing to help out if the shit really hits the fan...
Good luck.
My basic process for receiving a handover would be:
Get a general overview of the app, document it
Get a list of all future work that the client expects
... all known issues
... any implementation specifics
As much up-to-date documentation as they have
If possible, have them write some tests for critical components of the system (or at least get them thoroughly documented)
If there is too much documentation (which is possible), just confirm that it is all up to date, and make sure you find out from them where to start if it is not clear.
Ask as many questions as possible - anything that comes to mind, because you may not have the chance again.
Most handovers, perhaps all of them, will cause a lot of information to be lost. The only effective way to perform a handover that I have seen is to do it gradually. One way to do it is to allow a few key people from Phase One to stay on the project well into Phase Two.
The extreme solution is to get rid of all handovers, and start using an Agile mindset.
As a start, define the exit criteria for the handover. This should be discussed, negotiated and agreed with both parties and make sure higher management knows this. Then write up a checklist of all things needed to achieve the exit criteria and chase it.
Check out "Software Requirements" and Software Requirement Patterns for ideas on questions to ask when gathering information about a project. I think that just as they would work for new development, they would also help you to come to terms with an existing project.