Memory consumption by an app - Windows Phone 8.1 emulator

I'm currently working on a web-scraping app for Windows Phone 8.1, and I want it to consume as little memory as possible. That's why I need to check how much memory it uses while running, but I can't find an option to see this memory log. Does anyone have an idea how this can be done?

In Visual Studio 2013 there is a powerful tool for monitoring CPU usage, memory usage, and UI responsiveness: go to Debug -> Performance and Diagnostics, select Memory Usage, and start the diagnostic session.
Another option is to use the Windows Phone Developer Power Tools 8.1.
Hope it helps!

In many cases the emulator may not behave like a real device. I'd recommend using a Release build and testing on a real device.
If you have a Windows 10 Mobile device, you may consider enabling the Device Portal and monitoring your app's performance and memory usage there:
http://mspoweruser.com/new-device-portal-in-windows-10-mobile-makes-it-super-easy-to-sideload-apps-pictures/
It even exposes a REST API in case you want to do some test automation.

Related

What sort of things are UEFI "applications" actually used for?

I'm interested in PC firmware programming, and am just studying the UEFI spec. To my surprise, it seems like a spec for an entire OS which is embedded in firmware. You can even write UEFI "applications", which run directly using the UEFI boot services, without any other OS present.
I've found blog posts which show how to create a "Hello, world!" application which can run in the UEFI preboot environment. This is... interesting, and bizarre at the same time. I'll run my "Hello, world" programs on a regular OS, thank you.
What kind of use cases are UEFI applications actually good for? Fancy boot configuration screens? Does any "real", commercially available PC firmware use UEFI applications to implement anything more than just boot loaders and boot configuration utilities?
Anything that isn't a PEI/DXE/SMM core or driver is an application, so any "real" PC has them, because the BIOS Setup is actually a UEFI application. Some vendors include various other apps, like firmware updaters, diagnostic and troubleshooting utilities, etc. UEFI 2.4 makes it possible to add your own application with a properly filled BootXXXX/KeyXXXX variable pair and then run it by pressing a key combination during POST.
Most console applications written in C can be compiled as UEFI applications by using the StdLib package of the current EFI Development Kit (EDK2) and then run in the UEFI shell.
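For example, under that StdLib environment an ordinary console program needs no UEFI-specific code at all; the following is a minimal sketch, assuming the EDK2 AppPkg/StdLib build glue is set up, which then runs as a .efi binary from the UEFI shell:
    /* Plain ANSI C, no UEFI-specific headers.  Built against the EDK2
       StdLib (AppPkg) package, it becomes a UEFI shell application. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int i;
        printf("Hello from the UEFI shell (%d argument(s) passed)\n", argc - 1);
        for (i = 1; i < argc; i++)
            printf("  arg %d: %s\n", i, argv[i]);
        return 0;
    }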
Major examples of useful UEFI apps (besides bootloaders, the shell, and the Linux kernel, of course) are the Intel ME System Tools, Read Universal (RU.EFI), Python 2.7, and many more.
Eventually, when legacy boot is no longer available, all currently useful DOS utilities will either have to become UEFI applications or go extinct.
There are already many valuable answers here, but since I have written a couple of UEFI applications myself I will try to add my 2 cents. First, to establish what we are talking about, what is a UEFI application?
UEFI Specification v2.5:
Section 2.1.1
The major differences between image types are the memory type that the firmware will load the image into, and the action taken when the image’s entry point exits or returns. An application image is always unloaded when control is returned from the image’s entry point.
Section 2.1.2
When the application returns from its entry point, or when it calls the Boot Service EFI_BOOT_SERVICES.Exit(), the application is unloaded from memory and control is returned to the UEFI component that loaded the application.
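To make the quoted behaviour concrete, here is roughly what a minimal native UEFI application looks like. This is a sketch using the gnu-efi library (EDK2 uses a different entry-point name and build files); when efi_main returns, the firmware unloads the image and control goes back to whatever loaded it, exactly as described above:
    /* Minimal native UEFI application, gnu-efi style. */
    #include <efi.h>
    #include <efilib.h>

    EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
    {
        InitializeLib(ImageHandle, SystemTable);   /* set up gnu-efi globals (ST, BS, ...) */
        Print(L"Hello from the pre-boot environment!\n");
        /* Returning here behaves like calling Exit(): the image is unloaded
           and control returns to the shell or boot manager that started it. */
        return EFI_SUCCESS;
    }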
Groups of applications that make sense in UEFI:
Configuration tools - configuration interfaces for Option ROMs (e.g. for storage controllers), out-of-band management (e.g. AMT configuration tools), manufacturer performance-tweaking tools
Provisioning tools - used by administrators to preload specific BIOS settings; manually setting all options in the BIOS setup would be inefficient
Diagnostics tools - mostly for tests that cannot be performed in the OS (DRAM tests, full storage scans, storage R/W tests, etc.). In some regions specific diagnostics tools are required in the UEFI BIOS so that systems can be sold to government agencies.
Security applications - HDD encryption/decryption, antivirus scanners, and anti-theft applications
BIOS capability enhancement - Power over Ethernet extensions, DRAM discovery, patching and modification of system tables (SMBIOS, ACPI)
Display tools - for displaying complex animations during boot, splash screen display
Bootloaders - a special type of application that can call EFI_BOOT_SERVICES.ExitBootServices(), terminating all boot-time memory management and passing control to the operating system (see the sketch after the note below)
Note that a very important feature of a UEFI application is that it can be added to the boot order and executed on every boot. Also, a UEFI application does not have to be delivered with the BIOS image; it can be stored in a connected device's memory, which is common for Option ROM configuration tools.
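As a rough illustration of the hand-off mentioned under "Bootloaders" above, the final call sequence of a loader looks something like this. It is a gnu-efi sketch with error handling and the usual retry loop trimmed down; a real loader re-fetches the memory map immediately before ExitBootServices(), because any allocation invalidates the map key:
    #include <efi.h>
    #include <efilib.h>

    EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
    {
        UINTN map_size = 0, map_key, desc_size;
        UINT32 desc_version;
        EFI_MEMORY_DESCRIPTOR *map = NULL;
        EFI_STATUS status;

        InitializeLib(ImageHandle, SystemTable);

        /* First call with a zero-sized buffer only reports the required size. */
        uefi_call_wrapper(BS->GetMemoryMap, 5, &map_size, map, &map_key,
                          &desc_size, &desc_version);
        map_size += 2 * desc_size;              /* headroom for the real call */
        map = AllocatePool(map_size);

        status = uefi_call_wrapper(BS->GetMemoryMap, 5, &map_size, map, &map_key,
                                   &desc_size, &desc_version);
        if (EFI_ERROR(status))
            return status;

        /* After this succeeds, boot services (including all memory management)
           are gone; the loader now owns the machine and jumps into the kernel. */
        return uefi_call_wrapper(BS->ExitBootServices, 2, ImageHandle, map_key);
    }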
Here is an example of a full-blown UEFI pre-boot application.
Consider self-encrypting (SED) SSD/HDD drives. As soon as such a drive loses power it goes into a locked state (hardware-based encryption): there is no way to access the drive's data, and the partitions on the drive are no longer even visible. Only a small read-only partition (the shadow MBR) is available. The UEFI firmware boots a UEFI application from that only available partition (the app is written to that partition during initialization, when ownership of the SED is taken). It securely authenticates the user, and if the credentials are valid it unlocks the drive. When the drive is unlocked the shadow MBR disappears and all partitions on the drive become available. The app then chain-boots the installed OS.
So if you don't have the credentials you cannot even boot the OS, and you cannot access the data on the drive by any means.
Here are a couple of examples:
https://github.com/NikolajSchlej/CrScreenshotDxe
A UEFI DXE driver to take screenshots from a GOP-compatible graphics console (yes, you can take PNG screenshots of your BIOS and save them)
http://ruexe.blogspot.com/
RU.EFI is quite an advanced tool for debugging the BIOS
Well, there are the OS loaders - both the more heavyweight ones (Windows, GRUB, the BSD loader) and the "present a menu" ones (rEFInd, Gummiboot). Shim, which enables UEFI Secure Boot for Linux platforms, is itself an application, and it also installs a protocol for use by other applications.
Then you have things like the Linux kernel, which when compiled with CONFIG_EFI_STUB becomes a valid UEFI application that knows how to boot itself.
And firmware updates can also be shipped as UEFI applications.
The UEFI shell itself is an application.
Then there are things like factory production testing utilities, development diagnosis tools, ...
Windows 7 and 8 have UEFI installers. I'm not fully aware of the details, but I'm pretty sure this environment gives developers a lot more flexibility than the traditional boot environment on a DVD.
Some motherboards have "instant on" features that allow you to get to a desktop screen within a few seconds. This is usually a stripped-down flavor of Linux that lets you access a web browser and play music/video. ASUS has such boards.

Are there any major differences between Adobe AIR and Titanium?

At first I thought that with Titanium I could develop for mobile and desktop, whereas with AIR only for desktop, but after a quick look at the AIR site I guess I am wrong.
Benefit from a consistent, flexible, and visual development environment for applications on multiple platforms and devices such as smartphones, smartbooks, tablets, netbooks, and PCs.
So my question is: are there any major differences between Titanium and AIR that I should be aware of?
If not, I guess AIR may be better documented and has the backing of a more recognized company? After working with Titanium Desktop for a while I felt a bit helpless, and the docs are not really helping much.
There are a lot of subtle differences, of course, and there are advantages and disadvantages to working in either, but the largest difference is that Titanium can produce apps for the iPhone/iPad, and AIR can't (well, at least not conveniently).
AIR can produce iPhone apps that you can deploy using the ad-hoc provisioning, but you can't distribute via the app store.
I've got desktop apps on both and am making a mobile app right now. Titanium Desktop will cut your dev time to a third of what you'd spend jumping through AIR's various sandboxes and security measures. Best of all, the code I wrote for my Ti desktop app is all JavaScript with about 3 Ti API calls and can be taken anywhere. The AIR app is all mangled by the wild structure you have to use with AIR apps and a million API calls.
The downside to Ti Desktop is that the API isn't as fully featured, and the Ti team pushes four times as many updates for the mobile API as for the desktop API. Also, you won't be able to port your app from desktop to mobile easily, as they are two different structures and APIs.
That said, developing for iPhone and Android with Ti is the exact same process, and that won't happen with AIR.
Lots to weigh, but for my money it's Ti over AIR.
Hope this helps.

Using a laptop as a second programming monitor

The joys of multi-monitor programming are countless; I think there are about five blog posts on Coding Horror on the topic alone!
I often code in Windows on my main machine, and have my Mac laptop set up to the side. I use the Mac both to compile Mac builds and as my "reference web browser". There's no KVM or anything.
However, a casual conversation at a conference led me to the question: could I use two independent machines to share windows? Literally move some windows from one machine to another, so I could use one PC's display as "overflow" for the other.
Some googling shows that this is definitely possible in some situations:
Synergy and Maxivista
My question is whether any programmers have tried such a setup. We have unique needs especially with multiple text windows and editors, and this kind of tool may be a huge win or a huge hassle.
This solution feels like a combination of easy KVM switching AND multiple monitors... it sounds like a programming dream! So advice, or especially reports of actual experience in a programming environment, would be greatly useful before I invest in the rather complex setup.
Followup:
Sounds like I'm asking for something that doesn't exist! It's a kind of combination of a software KVM and VNC. But the VNC would need to break out the app windows and allow individual manipulation (like that MaxiVista commercial tool, which is Vista only).
Thanks for all the feedback. Looks like there's demand for a cool app if anyone has the drive to be first in this new niche!
Synergy doesn't allow you to move windows between machines (that would require a silly amount of work behind the scenes), but it does allow you to share a keyboard and mouse between two machines so they "appear" to be all one machine, but actually run separately.
I personally use Input Director, as I found it more stable than Synergy. I have my laptop with an external monitor to the right, and my desktop to the left as an Input Director slave. My desktop runs a different O/S and is basically my guinea pig box for testing stuff and for anything I need to keep running when I leave the office. Cut + paste is pretty seamless, so I can quite happily fire up an RDP session to a server on my desktop, and cut+paste SQL scripts from that to my laptop.
It's a very useful thing to have if you have a few physical boxes and monitors kicking around :)
I've actually managed to use a spare notebook as a second monitor for a desktop PC. This lets you move windows to the second PC, but not vice versa.
The solution works with basically any OS.
The only requirement is a spare VGA (or DVI-I/DVI-A) port on the server PC.
Make a dummy VGA plug: http://www.overclock.net/t/384733/the-30-second-dummy-plug
This will also work for a DVI-I/DVI-A port with a DVI-VGA adapter.
Detect the virtual monitor with your OS. The monitor will be detected as a very generic monitor, so you can set any resolution. Set it to the slave PC's resolution.
Use any remote-control software to connect from the slave to the server PC. Set it to display only the "virtual" monitor.
That's all. Your slave PC is now a second monitor for the server PC.
I've used this on Windows 7 with TeamViewer. I additionally set up Mouse Without Borders (Microsoft's Synergy analog) to be able to use the slave PC with the same mouse and keyboard, though this is not required if you intend to use it as a monitor only.
Xdmx - Distributed Multihead X Project (Linux only)
It provides a native X display on external machines, with none of the drawbacks of VNC.
The following is not exactly what you want, but pretty close:
You can start a VNC server on the Windows machine, which will let you "export" its graphical screen.
Then, unplug the monitor from the Windows machine and use it as an external monitor for your Mac laptop instead.
There, on your Mac, you just connect to the VNC session using Chicken of the VNC, which will give you the graphical screen content of the Windows machine as a Mac window (interactively, so you can actually control the Windows machine as if you were working on it directly). You can put that on the external monitor, and you can also put other windows there, so you really have a shared environment.
I believe this solution also lets you copy and paste content from the Windows screen to Mac windows and vice versa.
I use MaxiVista on WinXP while programming. It works fantastically and lets me add a third screen to my multi-monitor configuration.
There is hope for Windows users: http://virtualmonitor.github.io/ It looks like a work in progress and only supports Windows 2000 through Windows 7, but the author is looking for help with Windows 7 and 8.
Unfortunately, Synergy doesn't currently allow moving windows across machines. It only forwards mouse and keyboard events from one set of physical devices to different computers.
Yes, and I love it. It lets you get past two screens on a laptop, and I really find three a great amount.
If your main machine is a Mac you want ScreenRecycler. You can then use monitors on other Mac, Windows, and Linux machines (anything with a VNC client). You will want something better than the Mac's crappy window management, though. I suggest Many Tricks' Moom and Witch.
On Windows, as @LachlanG said, MaxiVista works great. And it supports adding monitors from Windows, Mac, and Linux machines.
I am reusing my old laptop as a second monitor to see the live preview while coding. I am using SpaceDesk, which is free.
I use Barrier, an open-source fork of Synergy. It's a little hard to use but works really well. (To find it, just search Google for 'barrier github'.)

Recommendations for automated testing tools for Windows CE and PDA devices

Is anyone out there aware of any good, or even reasonable, tools for automated testing on the Windows CE / Windows Mobile platforms? Potential tools that I am aware of include TestQuest, Countdown, SOTI Pocket Controller, and Eggplant. Are there any more that I have missed?
Alternatively, is anyone aware of a VNC or remote-display tool for Windows Mobile that replicates the Windows visual object hierarchy on the PC, rather than displaying the entire device as a single bitmap? If this could be done, mainstream desktop automation tools could be applied to Windows Mobile.
N.B. I have already read this related question which is useful, but am looking for a viable off the shelf alternative. This post is following up on a number of related posts in the PDA/Embedded section of SQAforums.
I realize that your question is directly "are there tools to do the automated testing on CE", but have you considered perhaps aiming your automation at a version of the app which can be accessed from a standard desktop environment? In this way, you are open to all of the standard automation tools.
For example, I have worked on a few projects where we needed to perform automated testing for the device. In all cases, the RF device was really just a web browser connecting to a web based app. The same URL and simple forms could be plugged into a standard desktop browser and be automated by any of the usual automation toolsets. Automation never replaces manual testing, so what we did on those projects was automate regression testing of the same web interface that was used by the RF devices, but still do some sanity manual testing directly on the devices.
Also, with regards to the VNC/bitmap issue, I've been down that road before and agree that it is a nightmare. Using standard desktop UI automation on a VNC bitmap is not only unreliable and unmaintainable, but slow - in most tools, the CPU churns away searching the entire bitmap from top left to bottom right for the desired image. Really really slow.
Check Hopper, a test tool for Windows Mobile.

Pros and Cons of Developing on a VM on a PC

I recently built myself a semi-beefed-up PC (Q9450, 8 GB DDR2-1066, 1 TB HDD, dual 8600GTs, Vista Ultimate, and dual 22" monitors), and I'm evaluating whether I should develop in a VPC/VMware session on top of Vista or not.
One benefit I can see is that I can run the same VM on my Vista laptop, so my development environment is the same on any of my machines. I also plan on purchasing an MBP before the end of the year.
I found a couple of articles online that semi-help.
Any other thoughts would be really appreciated.
For web development I like to have the server part separated out into a VM. My current setup is a MacBook Pro with several Debian VMs inside. I like the isolation aspect of it: I can try new software on the servers and revert them if something gets messed up.
I do the programming via a network share (Samba) in TextMate on the host system.
Another advantage of a VM is having a clean installed base. I use my desktop and laptop for lots of things aside from development. You never know when a piece of software you install is going to conflict, or if the little tweaks and what not you play around with are going to trash your OS. Reinstalling/configuring all your tools so they are exactly the way you want them can take quite some time. If you have a backup of your Development VM Image you can mess up your PC as much as you want but still be able to code without downtime. It also allows you to run Win/Visual Studio/Etc on a box that you would otherwise prefer Linux or MacOS on.
You can also make multiple copies of the same Image and use each one for a separate project.
Being able to transition between a laptop/desktop/server/remote connection, and always be in the same environment is also very helpful.
One problem I found (at least when using VMware Server) is that no matter how fast your machine is, the screen refresh rate is still around ~30 Hz. That makes for a slightly unpleasant experience after using it for a while.
Where I'm working now, I use a VM for all of my development because I don't have admin rights to my base copy of XP.
Pros:
I like using VMs because they give you some flexibility: you can switch between machines, have programs running on both, and have a nice environment to work in.
Cons:
You have to boot up multiple operating systems. This takes time, memory and resources.
Clipboard operations in VMs can be interesting at times. Sometimes copying to the clipboard does not work or gets mixed up between VMs (using VMware).
File operations can be interesting when you plug in USB drives and other external devices. VMs sometimes see the devices and sometimes do not.
If your VM image becomes corrupt, you can easily lose everything in it... unless it is backed up.
It's great for presenting development talks, you can revert to a snapshot and give the talk from the exact same starting point each time.
Bulk up the RAM on your future MacBook Pro if VMware will be used. I haven't (yet), and the performance with several other (Mac-side) apps open really starts to feel sluggish.
All the best.
I was doing some work with Visual Studio recently in a Windows XP VM on Linux, and somehow the guys who made the VM (VMware) made the Windows machine actually run faster. We ran some timing tests to make sure; it wasn't major, but a few things (autocomplete, for example) really did pop up faster.
If you are on Windows, Virtual PC is pretty decent for development work. VMware Server is not really designed for use as a desktop, and you will get very tired of it with any prolonged use. Sun's VirtualBox is another option competing with Virtual PC. VMware has a Workstation product, but it is not free.
Typically, I do development on the real desktop (non-virtual) and then deploy or test to virtual machines which I can snapshot and roll back easily.
For a long time, we were developing on very early versions of Visual Studio 2005 and the associated .Net bits that went along with it. To protect our real machines from the various problems associated with pre-release software, we did all of our development work inside virtual machines. It worked amazingly well. I've been considering moving back to that model as it makes upgrading the physical hardware a snap (not to mention making it easier to deal with hardware failures by just replacing the entire machine): you just copy the VM image over.
On my current machine (a Core 2 Duo with 4 GB of RAM), the performance drop when running one VM is barely noticeable. Running two VMs, however, is painful.
I also can't figure out how to get VMware Server to work well across two monitors.
I wouldn't want to develop in a VM so much as test things in a VM. For instance, it might be nice to set up a couple of VMs to emulate an n-tier architecture, or a client-server setup, or simply to test code on multiple OSes.
It depends what you are developing and in what language.
VMs tend to take a fairly hard hit on disk access, so compiling may slow down significantly, especially for large C/C++ projects. I'm not sure if this would be such an issue with .NET/Java.
If you are doing anything that is graphics intensive (3D, video, etc) then I would steer clear of a VM too.
I don't know if it is so useful as a development platform unless you are doing something that ties into software you don't want to have installed on your regular working machine or that needs to work around a certain event that you need to be able to reset on a regular basis. It can also be handy when you are working with code that risks crashing your computer as it will at least only crash your VM.
It is brilliant for testing different configurations and setups, working with installers and so on; that is where virtualisation really shines as far as I am concerned. Being able to roll things back whenever you need to and run through stuff repeatedly is amazingly useful for identifying problems before your end users run into them.
While doing development at home, I have to VPN into my company to be able to use the collaborative tools that are on the intranet. I also have a desktop + laptop that are hooked together through Synergy.
The problem that I have is that our VPN software wants things to be so secure that it will force all network routing through the VPN gateway -- even if I'm using additional NICs to network my desktop and laptop through a separate private network. The end result is that I can't use Synergy between my desktop and laptop and VPN into my company at the same time.
The solution suggested to me by a co-worker was to setup a VM instance on my desktop and use that for all my VPN needs. Works like a charm!
Speaking from personal experience developing java in an Ubuntu VM on Windows 7, I've found this to be quite productive. Mainly because my local IT support on the ground supports Windows 7, so I can do things like access all the local file shares and printers in Windows, and then config my Ubuntu VM to my heart's content.
Huge productivity benefits around remote access and desktop sharing. Windows allowed me to very quickly and easily use tools like logmein.com and join.me to access my machine from home and to desktop-share the VM with other people in the company (both work seamlessly with the VM in a nearly full-screen window). Neither of these services is supported on Linux, and I wouldn't want to deal with all the associated VNC/X setup and network config on Ubuntu.
My machine is fairly beefy: quad core, with 16 GB RAM, 8 GB of it for the VM. Java dev in the VM is pretty quick.