How to understand the Jitsi source code in order to modify it or add features - jitsi-meet

I would like to modify/customize Jitsi. To do that, I would like to
understand the calls involved and how the various interfaces communicate with each other.
It would be a great help if anyone could suggest the best way to proceed, or
any code-browsing and call-graph generation tools. It would also be a great
help to know the best way to set up a development and test environment
for all the components involved (Prosody, Jicofo, JVB, Jibri, Jigasi, etc.).

Related

What language to choose for SaaS API?

I work in a small organization that has built an enterprise SaaS solution. Up until this point our workflows have had no programmatic interface. We're moving to a model that will allow for an end user to do anything programmatically that can be done in the UI. I'm looking for suggestions in terms of the language/framework that you would use to build that programmatic layer.
From an organizational perspective I would like the current UI team to also have ownership of the API. That team is familiar with PHP, Rails, and JavaScript. Our current back-end code is written in Scala. I'm leaning toward not doing the APIs in Scala because it doesn't seem like the right tool for the job and because the UI team lacks subject-matter expertise in it.
From a functionality perspective most of the APIs will be fairly simple database operations (CRUD) with perhaps some simplistic business logic applied on top (search for example).
I'm a bit intrigued by using Node.js for this as everyone on the team is really strong with Javascript. That being said I don't just want to hop on the semi-new technology bandwagon. Because it is enterprise software, unit testing frameworks, reusability, and extendability are all important considerations as well.
Any suggestions?
I realize this question was about technology options, but there's a fundamental concern that seems really important to call out:
From an organizational perspective I would like the current UI team to also have ownership of the API.
While this sounds like a logical approach, it may not work out well unless your UI team is made up of really solid engineers. SaaS API development is arguably one of the most challenging aspects of modern software design. A great API will make everyone's lives easier, while a poor API will bring your system to its knees and leave you completely clueless as to why.
As a quick example, if you don't solve the end user's needs in the right way, you're likely to force a number of n+1 problems on them (and thus, on you).
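To make the n+1 point concrete: if the API only exposes fine-grained resources, a consumer who wants a list of orders and their items ends up making one request for the list plus one request per order. A minimal sketch, in Java for illustration only (the OrderApi client and its methods are hypothetical, not any real library):

```java
import java.util.List;

/** Hypothetical API client, used only to illustrate request counts. */
class OrderApi {
    List<String> fetchOrderIds()               { return List.of("o1", "o2", "o3"); } // 1 request
    List<String> fetchItemsForOrder(String id) { return List.of(id + "-item");     } // 1 request per order
    List<String> fetchOrdersWithItems()        { return List.of("o1 with items");  } // 1 request total
}

public class NPlusOneExample {
    public static void main(String[] args) {
        OrderApi api = new OrderApi();

        // The n+1 pattern: one call to list the orders, then one extra call per order.
        for (String id : api.fetchOrderIds()) {
            api.fetchItemsForOrder(id);
        }

        // A coarser-grained endpoint lets the client get the same data in a single round trip.
        api.fetchOrdersWithItems();
    }
}
```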
There is a bunch of great material out there about how to design great APIs and even more about the pitfalls of designing a bad one. Generally speaking, most of the UI devs I've worked with, particularly ones that are only familiar with scripting languages, are not people I would entrust to API design. Instead I would utilize them as customers (in a Scrum sense) who guide the design by describing end-user needs.
I faced something like this on a previous project, where we ended up going with a combo of Esper and our own DSL written using ANTLR 3.0. Our biggest concern with using a fully functional runtime was sandboxing the user's code.
That said, I think Node.js would be one of the easier ones to sandbox, and it fits your needs. Maybe use something like this: http://gf3.github.com/sandbox/ or look into Cloud9's code to see how they keep things safe. I also like that with Node.js you could give your users a pretty nifty editor using Ace.
Also check out this post: How to run user-submitted scripts securely in a node.js sandbox?

How can I create end-to-end tests for Mac (Cocoa) applications?

I have been reading a lot about test-driven development and decided that I want to give it a go on a small project. For reference, I am currently reading 'Growing Object-Oriented Software, Guided by Tests'.
I understand how to unit test my application and how to unit test certain parts of the UI as well, but I am struggling to set up end-to-end tests. For example, testing that a certain path through my whole application produces the correct output (this is my basic understanding of an end-to-end test).
It's not necessary to simulate click events, but it is necessary to have some sort of connection to the UI.
Am I right in thinking that I need a combination of "Logic" tests (test without launching the app), "Application" tests (test with launching the app) and the asynchronous functionality of something like GHUnit to accomplish this?
EDIT:
After reading some of the answers below, it sounds like I'm looking for functional end-to-end testing, but I think I should give an example of a test as I imagine it.
Start the application.
Call the login function with a test user's credentials. (Note: this doesn't necessarily need UI automation.)
Verify a label on the window says "Logging In...".
After successfully verifying the user, verify the label now says "Welcome, Adam!".
KIF sounds like it could work, as it has steps to check changes in UI elements, and it looks like there is a Mac OS X branch as well. I'm sure I could also write a small class that constantly polls the UI for the changes I expect and times out after a certain time, but I'm wondering if this is the right way to go about it.
However, perhaps I am trying to take what I am reading in 'Growing Object-Oriented Software, Guided by Tests' and trying to apply it too literally to Cocoa.
Another UPDATE:
So I've been reading the advice so far, checked the various places linked to and started to implement something whilst still referencing the book. I think what I'm really trying to get at is the Test-Driven Development part. What stood out most in the book was that they first described what they wanted to happen from a user's perspective, with acceptance tests.
I realise that solid unit testing will be necessary as soon as I start writing methods, but I was keen to write some high-level acceptance tests first, using some of the UI. I have started to write my own application "driver" class, borrowing ideas from GHAsyncTestCase to help me accomplish this. Does this sound correct/useful/necessary?
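The poll-until-the-UI-matches-with-a-timeout idea behind such a driver is language-agnostic; as a rough sketch of the shape the helper might take (written in Java purely for illustration, and every name here is made up rather than taken from GHUnit or KIF):

```java
import java.util.function.Supplier;

/** Illustrative sketch of a "wait until the UI shows X, or give up after a timeout" helper. */
public class UiPoller {

    /**
     * Repeatedly evaluates the condition until it returns true or the timeout expires.
     * Returns true on success, false if the timeout was hit.
     */
    public static boolean waitFor(Supplier<Boolean> condition, long timeoutMillis, long pollEveryMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.get()) {
                return true;
            }
            Thread.sleep(pollEveryMillis);
        }
        return false;
    }
}
```

An acceptance test would then assert on something like waitFor(() -> labelText().equals("Welcome, Adam!"), 5000, 100), failing the test if it returns false.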
I really appreciate all the comments so far and they have definitely helped me work out in my own head what I'm trying to do and what various areas of testing there are. I will finish up this question soon, as it is getting rather large, so any final advice is welcome!
I think the key thing I got from "Growing Object-Oriented Software" was to decouple as much as possible from the UI. Without code to look at it's harder to give suggestions, but given your revision I'd suggest separating the "verify a label says..." bit from the UI. What is setting this message, and can you just test for that event?
The more you can decouple from the UI the more you can unit-test (quicker and easier) rather than integrating other frameworks or drivers of UI elements.
You might be interested in Square's KIF framework: http://corner.squareup.com/2011/07/ios-integration-testing.html
It looks really cool for integration/UI testing.
I believe you can use the Accessibility features to script the UI. I'd check the WWDC 2011 videos for one entitled "Design Patterns to Simplify Mac Accessibility". They did something similar in 2010.
Based on your response to #Norman, I guess you're looking for recommendations that span both functional end-to-end and UI-based end-to-end testing, but perhaps a UI automation framework might change your mind? Something intrusive like FoneMonkey might be helpful:
http://www.gorillalogic.com/fonemonkey
If that doesn't work for you, I'd be interested to know why, and what "gap" you perceive in such UI-driven tests versus code-based functional testing.

Ease A-B Testing / Beta Testing support within a framework

I'm looking for an implementation strategy to ease A-B testing / Beta testing. I don't see any code/plugin available for any framework. If not a direct solution, let us at least brainstorm the requirements/expectations for such a component:
There are already a few threads related to my query:
Is there a PHP CMS with builtin A/B Testing Support?
Anyone got any good strategies for A/B testing with the Play Framework?
Beta Testing
As no one's answered this question, I'll attempt to do so.
Basically, I'm not sure if there's a directly useful connection between your PHP framework and your A/B testing needs. I think this is mainly because what you're testing can be almost anything: the colour of a conversion-sensitive button, a page layout, an entire registration funnel, etc. These don't inherently have anything to do with your PHP framework and there are lots of options for how you could do your testing.
Another issue is that you might not really know the parameters of what you're testing until you start testing. Your testing might lead you down a path you didn't even consider, so how could you have accounted for it in how you built the site? If you need a REALLY wide window for what you'll be testing, you're probably better off not building it at all and using some type of vapor/smoke-testing to get the basic concepts right first. Not everything can be subjected to testing, and you'll still need subjectively-generated hypotheses as your test cases (and your testing will be only as good as your hypotheses).
If you have something very specific that you need to test repeatedly over time and want to build this flexibility into the system, then I'd look for the most obvious solution in the framework to make it happen. For example, if you're using Symfony and if you think that you'll need to test 50 different sidebar variations for a page over the course of 6 months, it probably makes sense to build it as a slot/component so you can build some logic around simplifying your testing and swap those sidebars with ease. I'm not sure why it would need to be anything more complicated than that.
Overall, I'd also add that the role of A/B testing should be to guide your product to sell/convert/monetize/engage better. Unless you're building some type of a testing platform, I wouldn't over-think it. I tend to see that most sites fail to test sufficiently not because the system isn't flexible enough for various test cases but because top management won't give enough product/dev time for it, or because people aren't making enough use of their analytics packages to draw even the most basic of conclusions.
Hope that helps.
http://phpabtest.com/ looks like a pretty easy-to-use framework, and it's free!

How to approach implementing an interface in a TDD way

So I'm trying to convert myself to a more test- and behaviour-driven approach to my development. It's good for me, and I've seen good results in the few projects I've used it for so far.
My current project is a FUSE-based filesystem - I want to add some functionality over basic filesystem access so FUSE seemed like a good fit. All I really need to do is implement a set of functions that fit the appropriate interface, wrap it up appropriately, and go.
However, test first, I remind myself. I've already written a set of cucumber features to lay out the basic expectations of how the overall app should work, so now it's time to get down to testing the innards.
Now, I could just write unit tests for each of the functions I need to write for the interface, and then get to coding the interface - but that doesn't seem overly test-driven to me. Sure the tests exist, but the interface is really what's driving things.
Am I going about this wrong? Or am I expecting too much?
Give me a "what-what" in the comments if you think this should be community wiki - I can't even decide if this has a right answer.
Step 1. What is one thing the interface must do? One thing.
Step 2. How will you prove it does that one thing?
Step 3. Write a test to prove the interface really does that one thing.
Step 4. Run the test -- it will fail. You haven't actually written the interface.
Step 5. Code the interface.
Step 6. The test will pass.
Move on to the next thing the interface must do.
This has little to do with the functions you've already designed. This is totally focused on the externally visible features the interface must have. It may turn out that your functions are the right thing. Or it may turn out that you over-engineered these functions. Or under-engineered them. The point is to drive your design from the things a component must do and the tests that prove it does them.
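As a concrete illustration of that loop (your project is a FUSE filesystem, but the cycle looks the same in any language; this sketch uses Java with JUnit, and every name in it is hypothetical):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Step 1: the one thing the interface must do (a made-up example): report a file's size.
interface MetadataSource {
    long sizeOf(String path);
}

public class MetadataSourceTest {

    // Step 3: a test that proves that one thing.
    @Test
    public void reportsTheSizeOfAKnownFile() {
        MetadataSource source = new FixedSizeMetadataSource(42);
        assertEquals(42, source.sizeOf("/any/path")); // Step 4: this fails until the class below exists
    }

    // Step 5: the simplest implementation that makes the test pass (Step 6), to be grown later.
    static class FixedSizeMetadataSource implements MetadataSource {
        private final long size;
        FixedSizeMetadataSource(long size) { this.size = size; }
        @Override public long sizeOf(String path) { return size; }
    }
}
```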
Even though it's focused on Ruby, The RSpec Book has a good introduction to the BDD cycle.
I want to add some functionality over basic filesystem access so FUSE seemed like a good fit
It is hard to develop a FUSE filesystem. The two main problems are very hard debugging and multi-threading. I also had (and still have) problems testing my filesystem. Maybe inotify will satisfy your requirements.

BDD GUI Automation

I've started a new role in my life. I was a front-end web developer, but I've now been moved to testing web software, or rather, automating the testing of the software. I believe I am to pursue a BDD (Behavior Driven Development) methodology. I am fairly lost as to what to use and how to piece it together.
The code that is being used/written is in Java, for a web interface to the application under test. I have documentation of the tests to run, but I've been curious how to go about automating them.
I've been directed to Cucumber as one of the "languages" to help with the automation. I did some research and came across a web site with a synopsis of BDD tools/frameworks,
8 Best Behavior Driven Development (BDD) Tools and Testing Frameworks. This helped a little, but then I got a little confused about how to implement it. It seems that Selenium is a common denominator in a lot of the BDD frameworks for testing a GUI, but it still doesn't seem to describe what to do.
I then came across the term Functional Testing tool, and I think that confused me even more. Do they all test a GUI?
I think the one that looked like an all-in-one package was SmartBear TestComplete, and then there is what seems to be another similar application by SmartBear called TestLeft, but I think I saw that they still used Cucumber for the BDD part. There are a few others that looked like they might work as well, but I guess the other question is: what's the cheapest route?
I guess the biggest problem I have is how to make these tests more dynamic, since the UI/browser dimensions can easily change from system to system. How do I go about writing automation that can handle this and tie into a BDD methodology?
Does anyone have any suggestions here? Does anybody out there do this?
Thanks in advance.
BDD Architecture
BDD automation typically consists of a few layers:
The natural language steps
The wiring that ties the steps to their definition
The step definitions, which usually access page objects
Page objects, which provide all the capabilities of a page or widget
Automation over the actual code being exercised, often through the GUI.
The wiring between natural language steps and the step definitions is normally done by the BDD tool (Cucumber).
The automation is normally done using the automation tool (Selenium). Sometimes people do skip the GUI, perhaps targeting an API or the MVC layer instead. It depends how complex the functionality in your web page is. If in doubt, give Selenium a try. I've written automation frameworks for desktop apps; the principle's the same regardless.
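For orientation, "automation over the GUI" with Selenium boils down to driving a real browser from test code: open a page, find elements, interact with them, and read back state. A minimal sketch, assuming a Chrome driver is available; the URL and element IDs below are made up:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver(); // assumes chromedriver is on the PATH
        try {
            driver.get("https://example.org/login");
            driver.findElement(By.id("username")).sendKeys("test-user");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("log-in")).click();
            String banner = driver.findElement(By.id("welcome-banner")).getText();
            System.out.println("Banner after login: " + banner);
        } finally {
            driver.quit();
        }
    }
}
```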
Keeping it maintainable
To make the steps easy to maintain and change, keep the steps at a fairly high level. This is frequently referred to as "declarative" as opposed to "imperative". For instance, this is too detailed:
When Fred provides his receipt
And his receipt is scanned
And the cashier clicks "Refund to original card"
And the card is inserted...
Think about what the user is trying to achieve:
When Fred gets a refund to his original card
Generally a scenario will have a few Givens or Thens, but typically only one When (unless you have something like users interacting or time passing, where both events are needed to illustrate the behaviour).
Your page objects in this scenario might well be a "RefundPageObject" or perhaps, if that's too large, a "RefundToCardPageObject". This pattern allows multiple scenario steps to access the same capabilities without duplication, which means that if the way the capabilities are exercised changes, you only need to change them in one place.
Different page objects could also be used for different systems.
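As a rough idea of what such a page object could look like with Selenium WebDriver in Java (the class name and locators below are hypothetical):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

/** Hypothetical page object exposing the refund capabilities the scenario steps need. */
public class RefundPage {
    private final WebDriver driver;

    public RefundPage(WebDriver driver) {
        this.driver = driver;
    }

    /** Drives the UI; step definitions call this instead of touching elements directly. */
    public void refundToOriginalCard() {
        driver.findElement(By.id("refund-to-original-card")).click();
    }

    public String confirmationMessage() {
        return driver.findElement(By.id("refund-confirmation")).getText();
    }
}
```

If the way a refund is triggered changes, only refundToOriginalCard() has to change; the scenario steps that use it stay the same.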
Getting started
If you're attacking this for the first time, start by getting an empty scenario that just runs and passes without doing anything (make the steps empty). When you've done this, you'll have successfully wired up Cucumber.
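With Cucumber-JVM, that "empty but wired" stage might look roughly like this (the feature text and class name are made up; recent versions use the io.cucumber.java.en annotations):

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

// Matches a feature file along the lines of:
//   Scenario: Refund to original card
//     Given Fred bought a microwave
//     When Fred gets a refund to his original card
//     Then Fred has been refunded the price of the microwave
public class RefundSteps {

    @Given("Fred bought a microwave")
    public void fredBoughtAMicrowave() {
        // intentionally empty -- at this stage the point is only to prove the wiring works
    }

    @When("Fred gets a refund to his original card")
    public void fredGetsARefundToHisOriginalCard() {
        // intentionally empty
    }

    @Then("Fred has been refunded the price of the microwave")
    public void fredHasBeenRefunded() {
        // intentionally empty
    }
}
```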
Write the production code that would make the scenario run. (This is the other way round from the way you'd normally do it; normally you'd write the scenario code first. I've found this is a good way to get started though.)
When you can run your scenario manually, add the automation directly to the steps (you've only got one scenario at this point). Use your favourite assertion package (JUnit) to get the outcome you're after. You'll probably need to change your code so that you can automate over it easily, e.g. by giving relevant test IDs to elements in your web page.
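Filling in one of those previously empty steps might then look something like the sketch below; the locator, expected text, and the way the driver is created are all placeholders rather than a recommended setup:

```java
import static org.junit.Assert.assertEquals;

import io.cucumber.java.en.Then;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class RefundSteps {
    // In a real suite the driver would be created and shared via hooks; kept inline here for brevity.
    private final WebDriver driver = new ChromeDriver();

    // The previously empty step, now exercising the page via a test id and asserting with JUnit.
    @Then("Fred has been refunded the price of the microwave")
    public void fredHasBeenRefunded() {
        String message = driver.findElement(By.id("refund-confirmation")).getText();
        assertEquals("Refunded 89.99 to the original card", message);
    }
}
```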
Once you've got one scenario running, try to write any subsequent scenarios first; this helps you think about your design and the testability of what you're about to do. When you start adding more scenarios, start extracting that automation out into page objects too.
Once you've got a few scenarios, have a think about how you might want to address different systems. Avoid using lots of "if" statements if you can; those are hard to maintain. Injecting different implementations of page objects is probably better (the frameworks may well support this by now; I haven't used them in a while).
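One framework-independent way to do that injection is to put the capability behind an interface and pick the implementation once, from configuration, so the step definitions never branch on the system under test. A sketch (all names hypothetical):

```java
/** The capability the step definitions care about, regardless of which system is exercised. */
interface RefundActions {
    void refundToOriginalCard();
}

/** One implementation drives the web UI through a page object... */
class WebRefundActions implements RefundActions {
    @Override public void refundToOriginalCard() { /* drive the browser here */ }
}

/** ...another talks to a different system, e.g. an API, with no browser at all. */
class ApiRefundActions implements RefundActions {
    @Override public void refundToOriginalCard() { /* call the back-end API here */ }
}

class RefundActionsFactory {
    /** The choice is made once, from configuration, so step definitions never branch per system. */
    static RefundActions forSystem(String systemUnderTest) {
        return "api".equals(systemUnderTest) ? new ApiRefundActions() : new WebRefundActions();
    }
}
```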
Keep refactoring as you add more scenarios. If the steps are too big, split them up. If the page objects are too big, divide them into widgets. I like to organize my scenarios by user / stakeholder capabilities (normally related to the "when" but sometimes to the "then") then by different contexts.
So to summarize:
Write an empty scenario
Write the code to make that pass manually
Wire up the scenario using your automation tool; it should now run!
Write another scenario, this time writing the automation before the production code
Refactor the automation, moving it out of the steps into page objects
Keep refactoring as you add more scenarios.
Now you've got a fully wired BDD framework, and you're in a good place to keep going while making it maintainable.
A final hint
Think of this as living documentation, rather than tests. BDD scenarios hardly ever pick up bugs in good teams; anything they catch is usually a code design issue, so address it at that level. It helps people work out what the code does and doesn't do yet, and why it's valuable.
The most important part of BDD is having the conversations about how the code works. If you're automating tests for code that already exists, see if you can find someone to talk to about the complicated bits, at least, and verify your understanding with them. This will also help you to use the right language in the scenarios.
See my post on using BDD with legacy systems for more. There are lots of hints for beginners on that blog too.
Since you feel lost as to where to start, I will point you to some blog posts I have written that talk a bit about your problem.
Some categories that may help you:
http://www.thinkcode.se/blog/category/Cucumber
http://www.thinkcode.se/blog/category/Selenium
This, rather long and old post, might give you hints as well:
http://www.thinkcode.se/blog/2012/11/01/cucumberjvm-not-just-for-testing-guis
Note that the versions are dated, but hopefully it can give you some ideas as to what to look for.
I am not an expert on test automation, but I am currently working in this area, so let me share some ideas and hope they help you at this stage.
We have used Selenium + Cucumber + IntelliJ for testing a web application. We have used TestComplete + Cucumber + IntelliJ for testing a Java desktop application.
For testing the web application, we provided a test mode in the application itself, which lets us get useful details about the product and the environment, and also lets us easily trigger events by clicking buttons and entering text into the test panel while in test mode.
I hope these are helpful for you.