Cronet and ExperimentalCronetEngine - chromium

Are there any drawbacks to using ExperimentalCronetEngine instead of CronetEngine? We would like to experiment with the network quality estimator, which is only exposed in ExperimentalCronetEngine.

Every CronetEngine is really an ExperimentalCronetEngine; you can cast between them. APIs on CronetEngine will be supported forever, while ExperimentalCronetEngine APIs may come and go.
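To make that concrete, here is a minimal sketch, assuming the Android (Java) flavor of Cronet from org.chromium.net; the network quality estimator methods shown are part of the experimental API and are exactly the kind of surface that may change between releases:

```java
import android.content.Context;
import org.chromium.net.CronetEngine;
import org.chromium.net.ExperimentalCronetEngine;

public final class NetworkQualityDemo {
    // The network quality estimator is enabled on the experimental builder,
    // so start from ExperimentalCronetEngine.Builder directly.
    static ExperimentalCronetEngine buildEngine(Context context) {
        return new ExperimentalCronetEngine.Builder(context)
                .enableNetworkQualityEstimator(true)
                .build();
    }

    // If another component hands you a plain CronetEngine, the downcast
    // works because the concrete engine implements the experimental API.
    static void logEstimates(CronetEngine plainEngine) {
        ExperimentalCronetEngine engine = (ExperimentalCronetEngine) plainEngine;
        // Each getter returns -1 (unknown) until enough samples have been taken.
        System.out.println("HTTP RTT (ms): " + engine.getHttpRttMs());
        System.out.println("Transport RTT (ms): " + engine.getTransportRttMs());
        System.out.println("Downstream (kbps): " + engine.getDownstreamThroughputKbps());
    }
}
```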

Related

Can you make a program similar to OpenBTS for CDMA using the gr-cdma library?

Can you create a program like OpenBTS for CDMA using this library?
You can, in theory. In practice, CDMA is but a very small aspect of a very complex standard such as UMTS; all the other code necessary to create something that acts correctly on a standards-compliant network will far outweigh the CDMA-related code alone.
It might also be worth noting that communication standards put hard limits on things like reaction time, something that gr-cdma might not be able to meet on your hardware.
A communication standard is much more than its medium access mechanism.

Translations API

Are there any translation APIs that you can download and work with within the Windows Phone framework, without having to call out to an external service via the web?
Not at this point.
Why? Due to their nature, direct translation on the device would require a lot of effort: you're not just translating word-for-word; you also have to consider language semantics and grammar. You could, of course, roll your own solution, but at that point your marginal benefit compared to using an existing web API will be reduced to zero.

If I write a framework that gets information from the Internet, should I make a delegate or use blocks?

Say I'm writing a publicly available framework for the Vimeo API. This framework needs to get information from the Internet. Because this can take some time, I need to use threading to prevent the UI from hanging. Foundation uses delegates for this, like NSURLConnectionDelegate. However, Game Kit uses blocks as callback functions.
What is the recommended way of doing this? I know blocks aren't supported in standard GCC versions, but they require much less code from whoever uses my framework.
Delegates, on the other hand, are real methods, and when protocols are used I can be sure the methods are implemented.
Thanks.
I really like blocks but I would be tempted to use a delegate protocol in this case. Network connections can fail in a large number of ways and their delegates tend to keep a fair amount of stateful information about them. I find that that maps well to a delegate protocol with a number of optional methods.
If you're providing a very simplified API for accessing network data then a success/failure pair of blocks might be sufficient. Personally I find that I have to deal with a lot of different cases which use many delegate methods on a stateful delegate object. For example: should I retry failed connections immediately or later, does the relative priority of failed connections change, can I make use of a partial response, should I switch a connection to Wi-Fi when it becomes available, do I offer the user a chance to authenticate if prompted, do I display incremental progress in a connection? You could handle all of those with blocks, but I find that I would rather have a delegate class managing the connection (a sketch contrasting the two shapes follows below).
Without knowing more about what data you intend for your interface to fetch, I don't know that I can be more specific, but I would be tempted to allow users of the API to manage their own connection state if possible.
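The question is about Objective-C, but the tradeoff between the two shapes is language-neutral. Here is a hypothetical sketch in Java (all names invented for illustration) contrasting a stateful delegate protocol, with optional methods modeled as default methods, against a simple success/failure callback pair:

```java
import java.util.function.Consumer;

// Delegate-style API: one stateful object observes the whole connection
// lifecycle. "Optional" protocol methods become default no-op methods.
interface VideoFetchDelegate {
    void fetchDidSucceed(String json);
    void fetchDidFail(Exception error);
    default void fetchWillRetry(int attempt) {}               // optional
    default void fetchDidReceivePartialData(byte[] chunk) {}  // optional
    default void fetchDidUpdateProgress(double fraction) {}   // optional
}

// Block-style API: ideal for the simplified "just give me the data" case,
// but every extra concern (retries, partial data, progress) means another
// callback parameter or a growing configuration object.
interface VideoClient {
    void fetch(String url, VideoFetchDelegate delegate);
    void fetch(String url, Consumer<String> onSuccess, Consumer<Exception> onFailure);
}
```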
It all depends on who your target audience is. If you want people writing apps for OS X 10.5 or iOS 3.x, then you need to use delegates. Otherwise, go ahead and use blocks.
It's quite a subjective question since both are valid options, but Apple seems to be shifting further towards using blocks for "throw-away" methods.
The main question would be your target audience.
Blocks are limited to Snow Leopard (and iOS 4, if I remember correctly).
If you want your framework to be usable on earlier operating systems, you can't use blocks.
If you're happy with the OS limitations, then go with blocks and NSOperationQueue; it's really good and simple to use.
Better yet, you could offer both options.
I would recommend using blocks, and if you do it right, you can support 10.5 at the same time.
Check out the open-source PLBlocks runtime; it allows you to seamlessly use blocks on both 10.5 and 10.6.

What is the best way of pulling JSON data in terms of performance?

Currently I am using HttpWebRequest to pull JSON data from an external site, and the performance was not good. Is WCF much better?
I need expert advice on this.
Probably not, but that's not the right question.
To answer it: WCF, which certainly supports JSON, is ultimately going to use HttpWebRequest at the bottom level, and it will certainly have the same network latency. Even more importantly, it will use the same server to get the JSON. WCF has a lot of advantages in building, maintaining, and configuring web services and clients, but it's not magically faster. It's possible that your method of deserializing JSON is really slow compared to what WCF would use by default, but I doubt it.
And that brings up the really important point: find out why the performance is bad. Changing frameworks is only an intelligent optimization option if you know what's slow and, by extension, how doing something different would make it less slow. Is it the server? Is it deserialization? Is it the network? Is it authentication or some other request overhead detail? And so on.
So the real answer is: profile! Once you know what the performance issue really is, you can make an informed decision about whether a framework like WCF would help.
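The question is about .NET, but the profiling step itself is language-agnostic. As a sketch of the idea in Java (using the standard java.net.http client and a placeholder URL), separate the time spent on the network and server from the time spent deserializing before blaming the framework:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public final class JsonTimingSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest
                .newBuilder(URI.create("https://example.com/data.json")) // placeholder
                .GET()
                .build();

        long start = System.nanoTime();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        long afterNetwork = System.nanoTime();

        Object parsed = parseJson(response.body());
        long afterParse = System.nanoTime();

        System.out.printf("network+server: %d ms, deserialization: %d ms%n",
                (afterNetwork - start) / 1_000_000,
                (afterParse - afterNetwork) / 1_000_000);
    }

    // Stand-in for whichever JSON library you actually use.
    private static Object parseJson(String body) {
        return body;
    }
}
```

If nearly all of the time lands in the first bucket, switching client frameworks won't help; look at the server, the network path, or caching instead.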
The short answer is: no.
The longer answer is that WCF is an API which doesn't specify a communication method but supports multiple methods. However, those methods normally run over SOAP, which involves more overhead than plain JSON, and it would seem the world has decided to move on from SOAP.
What sort of performance are you looking for and what are you getting? It may be that you are simply facing physical limitations of network locations, in which case you might look towards making your interface feel more responsive, even if the data is sluggish.
It'd be worth it to see if most of the latency is just in reaching the remote site (e.g. response times are comparable to ping times). Or, perhaps, the problem is the time it takes for the remote site to generate and serve the page. If so, some intermediate caching might be best.
+1 on what Isaac said, but one thing I'd add: if you do use WCF here, it'll internally use HttpWebRequest in most places, so you're definitely not gaining performance at all. One way you may unintentionally gain performance, however, is in how WCF recycles, reuses, pools, and caches most transport objects internally. So it ultimately goes back to Isaac's advice on profiling.

How much low-level stuff to expose in an API?

When designing the public API of a generic library, how much of the low-level stuff that's used internally should be exposed? On the one hand, users should not depend too heavily on implementation details, and too many low-level functions/classes might clutter the API. Therefore, the knee-jerk response might be "none". On the other hand, some of the low-level functionality might be useful to people, and exposing more of it can prevent abstraction inversion (the re-implementing of low-level constructs on top of high-level constructs).
Furthermore, exposing more low-level details could provide performance shortcuts. For example, let's say you have a function to find the median of an array. The principle of least surprise says that you should duplicate the array so that users of your API don't have to care that its implementation involves the side effect of reordering elements. Should you, in this case, note that median() costs a memory allocation and provide another function that bypasses the allocation, but will arbitrarily reorder the user's input?
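As a sketch of that choice (in Java, with invented names), one common pattern is a safe convenience function layered on top of a documented destructive one:

```java
import java.util.Arrays;

public final class Stats {
    // Convenience form: least surprise. Clones the input, so the caller's
    // array is untouched, at the cost of an O(n) allocation.
    public static double median(double[] values) {
        return medianDestructive(values.clone());
    }

    // Performance form: the side effect is in the name and the contract.
    // Reorders the caller's array (a full sort here; a selection algorithm
    // would avoid the O(n log n) cost but would still reorder).
    public static double medianDestructive(double[] values) {
        Arrays.sort(values);
        int n = values.length;
        return (n % 2 == 1)
                ? values[n / 2]
                : (values[n / 2 - 1] + values[n / 2]) / 2.0;
    }
}
```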
What are some general guidelines for how much of this kind of detail to expose?
As little as is possible.
The more details you expose, the more likely a change will break a consumer.
Your API shouldn't allow callers to "break" anything by mucking up the state of the internals (e.g. reordering collections, etc.). To solve that problem, your exposed interfaces should be read-only where necessary.
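As a small illustration of that point (a hypothetical Java sketch), you can hand callers a live but unmodifiable view of internal state instead of the mutable collection itself:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public final class TaskQueue {
    private final List<String> pending = new ArrayList<>();

    public void enqueue(String task) {
        pending.add(task);
    }

    // Callers can inspect the queue, but any attempt to mutate or reorder
    // the returned view throws UnsupportedOperationException.
    public List<String> pendingTasks() {
        return Collections.unmodifiableList(pending);
    }
}
```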
With respect to complexity, I lean far in the direction of simple, basic methods. I try very hard not to over-engineer anything with what I think will be needed down the road.
Write to today's requirements (maybe tomorrow's), but not beyond. You can always extend in the future. It's much harder to just drop things that you can't maintain anymore.
The unix way of doing it is to provide mechanisms, not policies. Just provide the right tools to do things (say, a knife), but try not to anticipate how they are going to be used (to peel apples or to sharpen pencils).
One way I've heard it expressed: Expose the what, but not the how.
The goal is to provide a useful and rich library for clients to use, without making them dependent of the library's internals. You want to be able to change the internals, without breaking the callers (as someone else already noted).
Writing a good API involves a certain amount of artful brinkmanship.