Get available memory in WinRT (Windows 8)

I want to plot a large amount of data within a Metro application and I need to buffer it. To find out how much I can buffer, it would be great to know how much memory is still available (to my app); this shouldn't include virtual memory.
Is there any way in a Metro app to get this information? I only found GlobalMemoryStatusEx, but that can only be used in desktop apps.
Thank you

I just had to deal with this and got hold of the right people at Microsoft to answer it. Unfortunately the answer was: no, you can't do that, except by using the restricted calls you found, and using those prevents you from getting certified for publication in the Store.

How about just trying to allocate a lot of memory in chunks? When an allocation first fails, add up the sizes of the chunks and either release them or use them for your operations.
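A rough sketch of that probing idea in C++ (the ProbeAllocatableBytes name and the 16 MB chunk size are arbitrary choices for illustration; probing like this can itself push the app close to its limit, so treat the result only as a rough upper bound):

#include <cstddef>
#include <new>
#include <vector>

// Allocate fixed-size chunks until an allocation fails, then free them
// and report how much was obtained.
std::size_t ProbeAllocatableBytes()
{
    const std::size_t chunkSize = 16 * 1024 * 1024; // 16 MB per chunk
    std::vector<char*> chunks;
    std::size_t total = 0;
    for (;;)
    {
        char* p = new (std::nothrow) char[chunkSize];
        if (p == nullptr)
            break;
        // Touch the pages so the memory is actually committed.
        for (std::size_t i = 0; i < chunkSize; i += 4096)
            p[i] = 1;
        chunks.push_back(p);
        total += chunkSize;
    }
    for (char* p : chunks)
        delete[] p;
    return total;
}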

Related

Program won't start before stopping the program first?

I'm a newbie using LabVIEW for my project. I'm developing a program that gathers data from sensors attached to a DAQmx board and also from an STS-VIS Ocean Optics spectrometer. In the first version I combined both devices in one loop inside the same flat sequence structure, but I got an error saying: "The application is not able to keep up with the hardware acquisition." I could not get the data to show on the graph for both devices, but it was fine when I ran them separately. I found a suggestion that I need to separate the two devices into different while loops because they may have different buffer sizes (?). I did that and it worked: all the sensors show up in their graphs. But the weird thing is, I have to stop the program after the first run and then run it again before the graphs show in the application. Can anyone tell me what I did wrong and suggest a solution? Due to project rules I cannot share my VI here publicly, but if anyone is interested in helping, I'd be happy to share it privately. Thank you.
You are doing the right thing, but you have to understand how data acquisition works in LabVIEW and in the hardware.
You can increase the hardware buffer programmatically using a property node, or try to read as fast as possible; then you don't need two separate loops.
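For reference, here is roughly what the same buffer configuration looks like in the NI-DAQmx C API (the channel name "Dev1/ai0", the sample rate, and the buffer size are placeholders; in LabVIEW the equivalent is the DAQmx Buffer property node together with continuous-samples timing):

#include <NIDAQmx.h>
#include <stdio.h>

int main(void)
{
    TaskHandle task = 0;
    float64 data[1000];
    int32 read = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    // Continuous acquisition at 10 kS/s, reading 1000 samples per loop pass.
    DAQmxCfgSampClkTiming(task, "", 10000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    // Enlarge the per-channel input buffer so the driver can keep up.
    DAQmxCfgInputBuffer(task, 100000);

    DAQmxStartTask(task);
    DAQmxReadAnalogF64(task, 1000, 10.0, DAQmx_Val_GroupByChannel,
                       data, 1000, &read, NULL);
    printf("read %d samples\n", (int)read);

    DAQmxClearTask(task);
    return 0;
}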
I currently work with an NI DAQmx device too and became desperate using LabVIEW because there is no good documentation and/or examples. Then I started to use Python, which I found more intuitive. The only disadvantage is that the user interface is not as easily generated, but for that you can use Qt Designer (an open-source program available online).

HoloLens external rendering

Does anyone have a good solution for external rendering for Microsoft HoloLens apps? To be specific: is it possible to let my laptop render an amount of 3D objects that is too much for the HoloLens GPU, and then display them on the HoloLens over Wi-Fi, including the spatial mapping and interaction?
It's possible to render remotely, both directly from the Unity editor and from a built application.
While neither achieves your goal of a "good solution", they both allow very intensive applications to at least run at all.
This walks you through how to add it to an app you're building.
https://learn.microsoft.com/en-us/windows/mixed-reality/add-holographic-remoting
This is for running directly from the editor:
https://blogs.unity3d.com/2018/05/30/create-enhanced-3d-visuals-with-holographic-emulation-in-uwp/
I don't think this is possible, since you can't really access the OS or the processor at all on the HoloLens. Even if you do manage to send the data to a third party to process, the data will still need to be run back through the HoloLens, which is really just the same as before.
You may find a way to hook up a VR backpack to it, but even then I highly doubt it would be possible.
If you are having trouble rendering 3D objects, you should reduce the number of triangles, use a lower-resolution shader, or reduce the size of the object. The biggest factor in processing 3D objects on the HoloLens is how much of the lens is being drawn on. If your object takes up 25% of the view instead of 100%, it will be easier to process on the HoloLens.
Also, if you can't avoid a lot of objects in the scene, check out LOD (level of detail), which reduces the detail of objects as the distance to them increases, and vice versa; a rough sketch of the idea follows.
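An engine-agnostic sketch of the LOD idea in C++ (the Mesh type and the distance thresholds are hypothetical; in Unity you would normally use the built-in LODGroup component rather than hand-rolling this):

#include <cstddef>
#include <vector>

struct Mesh { /* vertex/index buffers, etc. */ };

struct LodGroup
{
    std::vector<const Mesh*> levels;       // levels[0] = highest detail
    std::vector<float>       maxDistance;  // switch-over distance per level
};

// Pick a lower-detail mesh the farther the object is from the camera.
const Mesh* SelectLod(const LodGroup& group, float distanceToCamera)
{
    for (std::size_t i = 0; i < group.levels.size(); ++i)
    {
        if (distanceToCamera <= group.maxDistance[i])
            return group.levels[i];
    }
    return group.levels.back(); // beyond the last threshold: lowest detail
}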

Data usage from any application

I want to read how much 3G data every app uses. Is this possible in iOS 5.x? And in iOS 4.x? My goal is, for example:
Maps consumed 3 MB from your data plan
Mail consumed 420 kB from your data plan
etc, etc. Is this possible?
EDIT:
I just found an app that does this: Data Man Pro.
EDIT 2:
I'm starting a bounty. Extra points go to the answer that makes this clear. I know it is possible (see the screenshot from Data Man Pro), and I'm sure the solution is limited. But what is the solution, and how can it be implemented?
These are just hints, not a solution. I have thought about this many times but never really started implementing the whole thing.
First of all, you can calculate the transferred bytes by querying the network interfaces; take a look at this SO answer for code and a nice explanation of network interfaces on iOS (a rough sketch is also shown after these hints).
Use sysctl or similar system functions to detect which apps are currently running (by "running" I mean the process state is set to RUNNING, as the ps or top commands show on OS X; I have never tried it, I just suppose this is possible on iOS, hoping there are no problems for an app running as an unprivileged user), so you can deduce which apps are running and save the traffic stats for those apps. Obviously, given the possibility of applications running in the background, it is hard to determine which app is transferring data.
It could also be possible to retrieve information about network activity per process/app, like nettop does on OS X Lion; unfortunately nettop uses the private NetworkStatistics.framework, so you can't dig anything out of its implementation.
Take time into account: the interface counters are cumulative, so you have to sample them periodically and attribute the deltas to whatever was running during each interval.
My 2 cents
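A minimal sketch of the interface-counter part of the first hint, written against the BSD/Darwin APIs (the "pdp_ip" prefix for cellular interfaces is an assumption, the counters are per interface since boot rather than per app, and the exact header that declares struct if_data can vary between SDKs):

#include <ifaddrs.h>
#include <net/if.h>
#include <net/if_var.h>
#include <sys/socket.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct ifaddrs *addrs = NULL;
    uint64_t inBytes = 0, outBytes = 0;

    if (getifaddrs(&addrs) != 0)
        return 1;
    for (struct ifaddrs *cur = addrs; cur != NULL; cur = cur->ifa_next)
    {
        // Link-level entries carry the per-interface statistics.
        if (cur->ifa_addr == NULL || cur->ifa_addr->sa_family != AF_LINK ||
            cur->ifa_data == NULL)
            continue;
        if (strncmp(cur->ifa_name, "pdp_ip", 6) != 0) // cellular interfaces
            continue;
        const struct if_data *stats = (const struct if_data *)cur->ifa_data;
        inBytes  += stats->ifi_ibytes;
        outBytes += stats->ifi_obytes;
    }
    freeifaddrs(addrs);
    printf("cellular in: %llu bytes, out: %llu bytes\n",
           (unsigned long long)inBytes, (unsigned long long)outBytes);
    return 0;
}

Note that this only gives device-wide totals per interface; attributing them to individual apps is exactly the hard part discussed above.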
No. All applications in iOS are sandboxed, meaning you cannot access anything outside of the application, so I do not believe this is possible. Nor do I believe data traffic is saved at this level on the device; otherwise Apple would have exposed it on either the network page or the usage page in Settings.app.
Besides that, not everybody has a "data plan". In Sweden, for example, it is common for data traffic to be free of charge, without limits on either volume or speed.

boost::serialization high memory consumption during serialization

Just as the topic suggests, I've come across a slight issue with boost::serialization when serializing a huge amount of data to a file. The problem is that the serialization part of the application has a memory footprint of around 3 to 3.5 times that of the objects being serialized.
It is important to note that the data structure I have is a three-dimensional vector of base-class pointers, plus a pointer to that structure, like this:
using namespace std;
vector<vector<vector<MyBase*> > >* data;
This is later serialized with code analogous to this:
ar & BOOST_SERIALIZATION_NVP(data);
boost/serialization/vector.hpp is included.
Classes being serialised all inherit from "MyBase".
Now, since the start of my project I've used different archives for serialization: the typical binary archive, text, XML, and finally the polymorphic binary/XML/text archives. Every single one of these acts exactly the same way.
Typically this wouldn't be a problem if I had to serialize small amounts of data, but the number of objects I have is in the millions (ideally around 10 million), and the memory usage, as far as I've been able to test it, consistently shows that the memory allocated by the boost::serialization part of the code is around 2/3 of the application's whole memory footprint while writing the file.
This amounts to around 13.5 GB of RAM taken for 4 million objects, where the objects themselves take 4.2 GB. This is as far as I've been able to take my code, since I don't have access to a machine with more than 8 GB of physical RAM. I should also note that this is a 64-bit application running on Windows 7 Professional x64, but the situation is similar on an Ubuntu box.
Does anyone have any idea how I would go about troubleshooting this? It is unacceptable for me to have such high memory requirements for an application that will not use as much memory while running as it does while serializing.
Deserialization isn't as bad, as it allocates around 1.5 times the needed memory. This is something I could live with.
I tried turning tracking off with boost::archive::archive_flags::no_tracking, but it acts exactly the same.
Anyone have any idea what I should do?
Using Valgrind, I found that the main cause of the memory consumption is a map inside the library used to track pointers. If you are certain that you do not need pointer tracking (i.e., you are sure there is no pointer aliasing), disable tracking. You can find the main concepts of disabling tracking here. In short, you must do something like this:
BOOST_CLASS_TRACKING(vector<vector<vector<MyBase*> > >, boost::serialization::track_never)
In my question I wrote a version of this macro with which you can disable tracking for a template class. This should have a significant impact on your memory consumption.
Also notice that there are pointers inside the containers; if you want track_never, you must disable tracking for them too. Currently I could not find any way to do this properly.
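A short sketch of what that can look like (this assumes the MyBase hierarchy from the question; the commented-out MyConcreteClass line is hypothetical, and note that Boost warns when objects of a track_never class are still serialized through pointers, which is why doing this for the element classes is hard to get right):

#include <vector>
#include <boost/serialization/tracking.hpp>
#include <boost/serialization/vector.hpp>

class MyBase; // the base class from the question, forward-declared here

// Never track the big container type itself ...
BOOST_CLASS_TRACKING(std::vector<std::vector<std::vector<MyBase*> > >,
                     boost::serialization::track_never)

// ... and, if you are certain there is no aliasing, each concrete element
// class as well (repeat for every class derived from MyBase):
// BOOST_CLASS_TRACKING(MyConcreteClass, boost::serialization::track_never)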

How to estimate size of Windows CE run-time image

I am developing an application and need to estimate how many resources (RAM and ROM) it will need to run on a device. I have been looking online but couldn't find any good tips on how to do this.
The system in question is an industrial system. The application itself will need the .NET Compact Framework and the following components besides the Windows CE core: SYSGEN_HTTPD (web server), SYSGEN_ATL (Active Template Library), SYSGEN_SOAPTK_SERVER (SOAP server), SYSGEN_MSXML_HTTP (XML/HTTP), SYSGEN_CPP_EH_AND_RTTI (exception handling and runtime type information).
Tx
There really is no way to estimate this, because application behavior and code can have wildly different requirements. Something that does image manipulation is likely to require more RAM than a simple HMI, but even two graphical apps that do the same thing could be vastly different based on how image algorithms and buffer sizes are set up.
The only way to really get an idea is to actually run the application and see what the footprint looks like. I would guess that you're going to want a BOM that includes at least 64 MB of RAM and 32 MB of flash at a bare minimum. Depending on the app, I'd probably ask for 128 MB of RAM. Flash would depend greatly on what the app needs to do.
Since you are specifying core OS components, and since I assume you can estimate your own application's resources, I take it you are asking for an estimate of the OS as a whole.
The simplest way to get an approximation is to build an emulator image (CE6 has an ARM one); that should give you a sense of the size. The difference from the final image will be in the size of the drivers for the actual platform you will use.