Why does runtime reduce when we use functions instead of two nested for loops - time-complexity

I've observed that the runtime of a program with two nested loops is greater than the runtime of the same logic written with a function in place of the inner loop. Why does that happen when we are performing the same logic in both cases?
Here is the sample code to print primes in a range using a function:
import java.util.Scanner;

public class Main
{
    public static void main(String[] args) {
        int n;
        boolean flag; // not used in this version
        Scanner input = new Scanner(System.in);
        n = input.nextInt();
        long st = System.nanoTime();
        for (int i = 2; i < n; i++)
        {
            if (isPrime(i))
            {
                System.out.printf("%d ", i);
            }
        }
        long et = System.nanoTime();
        System.out.println();
        System.out.printf("%.4f", (et - st) / 1000000000.0);
    }

    public static boolean isPrime(int n)
    {
        for (int i = 2; i <= Math.sqrt(n); i++)
        {
            if (n % i == 0)
            {
                return false;
            }
        }
        return true;
    }
}
The time taken to run this program is 0.017 s for an input of 1000.
Now if I use a nested for loop in the same code instead of the function, the execution time becomes nearly 1.26 s.
So my question is: why does nesting the loops increase the time when we are performing the same number of operations in both cases?
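For reference, the nested-loop version I am comparing against would look roughly like this (a reconstruction, since I did not post it; the flag variable plays the role of the function's early return false, and the break is the detail to check, because without it the inner loop keeps dividing long after a divisor has been found):

for (int i = 2; i < n; i++)
{
    boolean flag = true;
    for (int j = 2; j <= Math.sqrt(i); j++)
    {
        if (i % j == 0)
        {
            flag = false;
            break; // without this break the inner loop runs to completion
        }
    }
    if (flag)
    {
        System.out.printf("%d ", i);
    }
}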

Related

Chronicle Queue performance when using ByteBuffer

I'm using Chronicle Queue as a DataStore which will be written to once but read many times. I'm trying to get the best performance (time to read x number of records).
My data set (for my test) is about 3 million records, where each record consists of a bunch of longs and doubles. I initially started with the "highest-level" API, which was obviously slow, then the "self-describing" data as mentioned in this Chronicle documentation, and finally the "raw data" approach, which gave the best performance.
Code as below (the corresponding write() code is omitted for brevity):
public List<DomainObject> read()
{
    final ExcerptTailer tailer = _cq.createTailer();
    List<DomainObject> result = new ArrayList<>();
    for (; ; ) {
        try (final DocumentContext ctx = tailer.readingDocument()) {
            Wire wire = ctx.wire();
            if (wire != null) {
                wire.readBytes(in -> {
                    final long var1 = in.readLong();
                    final int var2 = in.readInt();
                    final double var3 = in.readDouble();
                    final int var4 = in.readInt();
                    final double var5 = in.readDouble();
                    final int var6 = in.readInt();
                    final double var7 = in.readDouble();
                    result.add(DomainObject.create(var1, var2, var3, var4, var5, var6, var7));
                });
            } else {
                return result;
            }
        }
    }
}
However, to improve my application's performance, I started using a ByteBuffer instead of a DomainObject, and thus modified my read method as below:
public List<ByteBuffer> read()
{
    final ExcerptTailer tailer = _cq.createTailer();
    List<ByteBuffer> result = new ArrayList<>();
    for (; ; ) {
        try (final DocumentContext ctx = tailer.readingDocument()) {
            Wire wire = ctx.wire();
            if (wire != null) {
                ByteBuffer bb = ByteBuffer.allocate(56);
                wire.readBytes(in -> in.read(bb));
                result.add(bb);
            } else {
                return result;
            }
        }
    }
}
The above code listing took an average of 550 ms vs. 270 ms for the first listing.
I also tried using Bytes.elasticByteBuffer as mentioned in this post, but it was far slower.
I'm guessing the second code listing is slower because it has to loop through the entire byte array.
So my question is: is there a more performant way to read bytes from Chronicle Queue into a ByteBuffer? My data will always be 56 bytes, with 8 bytes for each data item.
I suggest you use Chronicle-Bytes instead of a raw ByteBuffer. Chronicle's Bytes class is a wrapper on top of ByteBuffer but much easier to use.
The problem with your code is that you create a bunch of objects instead of stream-processing. I suggest you read with something like:
public void read(Consumer<Bytes> consumer) {
    final ExcerptTailer tailer = _cq.createTailer();
    for (; ; ) {
        try (final DocumentContext ctx = tailer.readingDocument()) {
            if (ctx.isPresent()) {
                consumer.accept(ctx.wire().bytes());
            } else {
                break;
            }
        }
    }
}
And your writing method could look like:
public void write(BytesMarshallable o) {
    try (DocumentContext dc = _cq.acquireAppender().writingDocument()) {
        o.writeMarshallable(dc.wire().bytes());
    }
}
And then your consumer could be like:
private BytesMarshallable reusable = new BusinessObject(); // your class here

public void accept(Bytes b) {
    reusable.readMarshallable(b);
    // your business logic here
    doSomething(reusable);
}
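For completeness, the reusable BusinessObject could be a plain class implementing BytesMarshallable. Here is a minimal sketch assuming the field layout from the question; the exact BytesIn/BytesOut generic signatures vary slightly between Chronicle versions, so treat this as illustrative:

import net.openhft.chronicle.bytes.BytesIn;
import net.openhft.chronicle.bytes.BytesMarshallable;
import net.openhft.chronicle.bytes.BytesOut;

public class BusinessObject implements BytesMarshallable {
    long var1;
    int var2, var4, var6;
    double var3, var5, var7;

    // Read fields in the same order they were written.
    public void readMarshallable(BytesIn in) {
        var1 = in.readLong();
        var2 = in.readInt();
        var3 = in.readDouble();
        var4 = in.readInt();
        var5 = in.readDouble();
        var6 = in.readInt();
        var7 = in.readDouble();
    }

    public void writeMarshallable(BytesOut out) {
        out.writeLong(var1);
        out.writeInt(var2);
        out.writeDouble(var3);
        out.writeInt(var4);
        out.writeDouble(var5);
        out.writeInt(var6);
        out.writeDouble(var7);
    }
}

Because a single instance is reused for every record, the read path allocates nothing per entry, which is exactly the per-record object cost the question is measuring.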

What am I doing wrong with my algorithm to compare sound samples?

I want to compare and recognize two sound streams. I created my own algorithm, but it is not working as precisely as I would like. For example, when I compare a few letters such as "A,B,C" with "D,E,F", or the word "facebook" with "music", the algorithm reports a match, even though these are not the same words. Is my algorithm simply imprecise, or is the problem the quality of the sound recorded with a laptop microphone?
My comparison concept:
I take, for example, 100 samples from one stream (it can be from the middle of the track) and check every window of the second stream in a loop: samples 0-99, then 1-100, then 2-101, and so on.
My program has a few tracks to compare against one input track, so the algorithm can pick the best match (the most similar window) from every track. Unfortunately, it is getting wrong results.
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.IO;
using System.Runtime.CompilerServices;
using System.Windows;
using Controller.Annotations;
using NAudio.Wave;

namespace Controller.Models
{
    class DecompositionOfSound
    {
        private int _numberOfSimilarSamples;
        private String _stream;

        public string Stream
        {
            get { return _stream; }
            set { _stream = value; }
        }

        public int IloscPodobnychProbek
        {
            get { return _numberOfSimilarSamples; }
            set { _numberOfSimilarSamples = value; }
        }

        public DecompositionOfSound(string stream)
        {
            _stream = stream;
            SaveSamples(stream);
        }

        private void SaveSamples(string stream)
        {
            var wave = new WaveChannel32(new WaveFileReader(stream));
            Samples = new byte[wave.Length];
            wave.Read(Samples, 0, (int) wave.Length);
        }

        private byte[] _samples;

        public byte[] Samples
        {
            get { return _samples; }
            set { _samples = value; }
        }
    }

    class Sample : INotifyPropertyChanged
    {
        #region Fields
        private IList<DecompositionOfSound> _listoOfSoundSamples = new ObservableCollection<DecompositionOfSound>();
        private string[] _filePaths;
        #endregion

        #region Properties
        public string[] FilePaths
        {
            get { return _filePaths; }
            set { _filePaths = value; }
        }

        public IList<DecompositionOfSound> ListaSciezekDzwiekowych
        {
            get { return _listoOfSoundSamples; }
            set { _listoOfSoundSamples = value; }
        }
        #endregion

        #region Methods
        public Sample()
        {
            LoadSamples(); // must be refreshed after every new recording!
        }

        public void DisplayMatchingOfSamples()
        {
            foreach (var decompositionOfSound in ListaSciezekDzwiekowych)
            {
                MessageBox.Show(decompositionOfSound.IloscPodobnychProbek.ToString());
            }
        }

        public DecompositionOfSound BestMatchingOfSamples()
        {
            int max = 0;
            DecompositionOfSound referenceToObject = null;
            foreach (var numberOfMatching in _listoOfSoundSamples)
            {
                if (numberOfMatching.IloscPodobnychProbek > max)
                {
                    max = numberOfMatching.IloscPodobnychProbek;
                    referenceToObject = numberOfMatching;
                }
            }
            return referenceToObject;
        }

        public void LoadSamples()
        {
            int i = 0;
            _filePaths = Directory.GetFiles(@"Samples", "*.wav");
            while (i < _filePaths.Length)
            {
                ListaSciezekDzwiekowych.Add(new DecompositionOfSound(_filePaths[i]));
                i++;
            }
        }

        public void CheckMatchingOfWord(byte[] inputSound, double eps)
        {
            foreach (var probka in _listoOfSoundSamples)
            {
                CompareBufforsOfSamples(inputSound, probka, eps);
            }
        }

        public void CheckMatchingOfWord(String inputSound, int iloscProbek, double eps)
        {
            var wave = new WaveChannel32(new WaveFileReader(inputSound));
            var samples = new byte[wave.Length];
            wave.Read(samples, 0, (int) wave.Length);
            var licznik = 0;
            var samplesTmp = new byte[iloscProbek];
            while (licznik < iloscProbek)
            {
                samplesTmp[licznik] = samples[licznik + (wave.Length >> 1)];
                licznik++;
            }
            foreach (var probka in _listoOfSoundSamples)
            {
                CompareBufforsOfSamples(samplesTmp, probka, eps);
            }
        }

        private void CompareBufforsOfSamples(byte[] inputSound, DecompositionOfSound samples, double eps)
        {
            int max = 0;
            for (int i = 0; i < (samples.Samples.Length - inputSound.Length); i++)
            {
                int counter = 0;
                for (int j = 0; j < inputSound.Length; j++)
                {
                    if (inputSound[j] * eps <= samples.Samples[i + j] &&
                        (inputSound[j] + inputSound[j] * (1 - eps)) >= samples.Samples[i + j])
                    {
                        counter++;
                    }
                }
                if (counter > max) max = counter;
            }
            samples.IloscPodobnychProbek = max;
        }
        #endregion

        #region INotifyPropertyChanged
        public event PropertyChangedEventHandler PropertyChanged;

        [NotifyPropertyChangedInvocator]
        protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)
        {
            PropertyChangedEventHandler handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
        }
        #endregion
    }
}
While comparing all of the sound samples, the algorithm finds the soundtrack which has the highest number of matching samples, but it isn't the correct recording. Does my comparison of the two recordings make sense, and how can I fix it to get the expected result? Would you like to help me find a solution to this problem? Sorry for my English.
Kind regards
You simply cannot do sample-level comparisons on recordings to determine your match. Even given two recordings on the same computer of the same word spoken by the same person - i.e. every detail exactly the same - the recorded samples will differ. Digital audio is like that: it can sound the same, but the actual recorded samples will not match.
Speech to text is not simple, and neither is voice recognition (i.e. checking the identity of a person from their voice).
Instead of samples you need to examine the frequency profiles of the recordings. Various sounds in natural speech have distinctive frequency distributions. Sibilants - the s sound - have a wide distribution of higher frequencies for instance, so are easy to spot - which is why they used to use sibilant detection for the old yes/no response detection on phone systems.
You can get the frequency profile of a waveform by using a Fast Fourier Transform on a block of samples. Run through the audio stream and do a series of FFTs to get a 2D map of the frequencies of the waveform, then look for interesting things like sibilants (lots of high frequency, very little low frequency).
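To make that concrete, here is a minimal sketch of computing the magnitude spectrum of one block of samples (written in Java, though it ports line-for-line to C#; it uses a naive O(n^2) DFT for clarity, whereas a real implementation would use an optimized FFT library):

// Magnitude spectrum of one block of audio samples via a naive DFT.
// Only the first n/2 bins are meaningful for real-valued input.
static double[] magnitudeSpectrum(double[] block) {
    int n = block.length;
    double[] mag = new double[n / 2];
    for (int k = 0; k < n / 2; k++) {
        double re = 0, im = 0;
        for (int t = 0; t < n; t++) {
            double angle = 2 * Math.PI * k * t / n;
            re += block[t] * Math.cos(angle);
            im -= block[t] * Math.sin(angle);
        }
        mag[k] = Math.hypot(re, im);
    }
    return mag;
}

Sliding this over the stream one block at a time yields the 2D time-frequency map described above; comparing those maps, rather than raw bytes, is where a match becomes meaningful.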
Of course you could just use one of the web-based Speech to Text APIs.

Check for array index to avoid outofbounds exception

I'm still very new to Java, so I have a feeling that I'm doing more than I need to here and would appreciate any advice as to whether there is a more proficient way to go about this. Here is what I'm trying to do:
Output the last value in the ArrayList.
Intentionally request an out-of-bounds index with System.out (index 4 in this case).
Bypass the incorrect value and print the last valid ArrayList value instead (I hope this makes sense).
My program runs fine (I'm adding more later, so userInput will eventually be used), but I'd like to do this without using a try/catch/finally block (i.e. by checking the index against the length) if possible. Thank you all in advance!
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;

public class Ex02 {
    public static void main(String[] args) throws IOException {
        BufferedReader userInput = new BufferedReader(new InputStreamReader(System.in));
        try {
            ArrayList<String> myArr = new ArrayList<String>();
            myArr.add("Zero");
            myArr.add("One");
            myArr.add("Two");
            myArr.add("Three");
            System.out.println(myArr.get(4));
            System.out.print("This program is not currently setup to accept user input. The last printed string in this array is: ");
        } catch (Exception e) {
            System.out.print("This program is not currently setup to accept user input. The requested array index which has been programmed is out of range. \nThe last valid string in this array is: ");
        } finally {
            ArrayList<String> myArr = new ArrayList<String>();
            myArr.add("Zero");
            myArr.add("One");
            myArr.add("Two");
            myArr.add("Three");
            System.out.print(myArr.get(myArr.size() - 1));
        }
    }
}
Check for the array index to avoid an out-of-bounds exception:
In a given ArrayList, you can always get its length. With a simple comparison, you can check the condition you want. I haven't gone through your code; below is what I was talking about:
public static void main(String[] args) {
    List<String> list = new ArrayList<String>();
    list.add("stringA");
    list.add("stringB");
    list.add("stringC");
    int index = 20;
    if (isIndexOutOfBounds(list, index)) {
        System.out.println("Index is out of bounds. Last valid index is " + getLastValidIndex(list));
    }
}

private static boolean isIndexOutOfBounds(final List<String> list, int index) {
    return index < 0 || index >= list.size();
}

private static int getLastValidIndex(final List<String> list) {
    return list.size() - 1;
}

How can I use Lucene's PriorityQueue when I don't know the max size at create time?

I built a custom collector for Lucene.Net, but I can't figure out how to order (or page) the results. Every time Collect gets called, I can add the result to an internal PriorityQueue, which I understand is the correct way to do this.
I extended PriorityQueue, but it requires a size parameter on creation: you have to call Initialize in the constructor and pass in the max size.
However, in a collector, the searcher just calls Collect when it gets a new result, so I don't know how many results I'll have when I create the PriorityQueue. Based on this, I can't figure out how to make the PriorityQueue work.
I realize I'm probably missing something simple here...
PriorityQueue is not a SortedList or SortedDictionary.
It is a kind of sorting implementation that returns the top M results (your PriorityQueue's size) out of N elements. You can add as many items as you want with InsertWithOverflow, but it will only ever hold the top M elements.
Suppose your search resulted in 1,000,000 hits. Would you return all of the results to the user?
A better way is to return the top 10 elements to the user (using PriorityQueue(10)), and if the user requests the next 10 results, you can make a new search with PriorityQueue(20) and return the next 10 elements, and so on.
This is the trick most search engines like Google use.
Every time Commit gets called, I can add the result to an internal PriorityQueue.
I cannot understand the relationship between Commit and search, so I will append a sample usage of PriorityQueue instead:
public class CustomQueue : Lucene.Net.Util.PriorityQueue<Document>
{
    public CustomQueue(int maxSize) : base()
    {
        Initialize(maxSize);
    }

    public override bool LessThan(Document a, Document b)
    {
        // Compare a & b by whatever field(s) define your ordering, e.g.:
        return String.CompareOrdinal(a.Get("field1"), b.Get("field1")) < 0;
    }
}
public class MyCollector : Lucene.Net.Search.Collector
{
    CustomQueue _queue = null;
    IndexReader _currentReader;

    public MyCollector(int maxSize)
    {
        _queue = new CustomQueue(maxSize);
    }

    public override bool AcceptsDocsOutOfOrder()
    {
        return true;
    }

    public override void Collect(int doc)
    {
        _queue.InsertWithOverflow(_currentReader.Document(doc));
    }

    public override void SetNextReader(IndexReader reader, int docBase)
    {
        _currentReader = reader;
    }

    public override void SetScorer(Scorer scorer)
    {
    }
}

searcher.Search(query, new MyCollector(10)); // First page.
searcher.Search(query, new MyCollector(20)); // 2nd page.
searcher.Search(query, new MyCollector(30)); // 3rd page.
EDIT for #nokturnal
public class MyPriorityQueue<TObj, TComp> : Lucene.Net.Util.PriorityQueue<TObj>
    where TComp : IComparable<TComp>
{
    Func<TObj, TComp> _KeySelector;

    public MyPriorityQueue(int size, Func<TObj, TComp> keySelector) : base()
    {
        _KeySelector = keySelector;
        Initialize(size);
    }

    public override bool LessThan(TObj a, TObj b)
    {
        return _KeySelector(a).CompareTo(_KeySelector(b)) < 0;
    }

    public IEnumerable<TObj> Items
    {
        get
        {
            int size = Size();
            for (int i = 0; i < size; i++)
                yield return Pop();
        }
    }
}
var pq = new MyPriorityQueue<Document, string>(3, doc => doc.GetField("SomeField").StringValue);
foreach (var item in pq.Items)
{
}
The reason Lucene's PriorityQueue is size-limited is that it uses a fixed-size implementation that is very fast.
Think about what a reasonable maximum number of results to return at a time would be and use that number; the "waste" when the results are few is not that bad compared to the benefit it gains.
On the other hand, if you have such a huge number of results that you cannot hold them, then how are you going to serve/display them? Keep in mind that this is for "top" hits, so as you iterate through the results you will be hitting less and less relevant ones anyway.
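The underlying mechanics are easy to sketch: keep a min-heap of at most M elements, and every insert beyond capacity evicts the current smallest, so each of the N inserts costs O(log M) no matter how large N grows. A minimal illustration in Java using the standard java.util.PriorityQueue (this mirrors what InsertWithOverflow does):

import java.util.PriorityQueue;

// Keep the M largest values seen in a stream using a fixed-capacity min-heap.
static PriorityQueue<Integer> topM(int[] stream, int m) {
    PriorityQueue<Integer> heap = new PriorityQueue<>(m); // smallest element on top
    for (int x : stream) {
        if (heap.size() < m) {
            heap.offer(x);
        } else if (x > heap.peek()) {
            heap.poll();    // evict the smallest of the current top M
            heap.offer(x);
        }
    }
    return heap; // holds the M largest; poll() drains them in ascending order
}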

Pattern for limiting number of simultaneous asynchronous calls

I need to retrieve multiple objects from an external system. The external system supports multiple simultaneous requests (i.e. threads), but it is possible to flood it, so I want to retrieve multiple objects asynchronously while throttling the number of simultaneous async requests. That is, I need to retrieve 100 items but don't want to be retrieving more than 25 of them at once. When each request of the 25 completes, I want to trigger another retrieval, and once they are all complete I want to return all of the results in the order they were requested (i.e. there is no point returning the results until the entire call has completed). Are there any recommended patterns for this sort of thing?
Would something like this be appropriate (pseudocode, obviously)?
private List<externalSystemObjects> returnedObjects = new List<externalSystemObjects>();

public List<externalSystemObjects> GetObjects(List<string> ids)
{
    int callCount = 0;
    int maxCallCount = 25;
    WaitHandle[] handles; // bookkeeping of the handles array omitted in this sketch

    foreach (string id in ids)
    {
        if (callCount < maxCallCount)
        {
            WaitHandle handle = executeCall(id, callback); // executeCall starts an async request
            addWaitHandleToWaitArray(handle);
            callCount++;
        }
        else
        {
            int returnedCallId = WaitHandle.WaitAny(handles);
            removeReturnedCallFromWaitHandles(handles);
        }
    }
    WaitHandle.WaitAll(handles);
    return returnedObjects;
}

public void callback(object result)
{
    returnedObjects.Add(result);
}
Consider the list of items to process as a queue from which 25 processing threads dequeue tasks, process a task, add the result, then repeat until the queue is empty:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

class Program
{
    class State
    {
        public EventWaitHandle Done;
        public int runningThreads;
        public List<string> itemsToProcess;
        public List<string> itemsResponses;
    }

    static void Main(string[] args)
    {
        State state = new State();
        state.itemsResponses = new List<string>(1000);
        state.itemsToProcess = new List<string>(1000);
        for (int i = 0; i < 1000; ++i)
        {
            state.itemsToProcess.Add(String.Format("Request {0}", i));
        }
        state.runningThreads = 25;
        state.Done = new AutoResetEvent(false);
        for (int i = 0; i < 25; ++i)
        {
            Thread t = new Thread(new ParameterizedThreadStart(Processing));
            t.Start(state);
        }
        state.Done.WaitOne();
        foreach (string s in state.itemsResponses)
        {
            Console.WriteLine("{0}", s);
        }
    }

    private static void Processing(object param)
    {
        Debug.Assert(param is State);
        State state = param as State;
        try
        {
            do
            {
                string item = null;
                lock (state.itemsToProcess)
                {
                    if (state.itemsToProcess.Count > 0)
                    {
                        item = state.itemsToProcess[0];
                        state.itemsToProcess.RemoveAt(0);
                    }
                }
                if (null == item)
                {
                    break;
                }
                // Simulate some processing
                Thread.Sleep(10);
                string response = String.Format("Response for {0} on thread: {1}", item, Thread.CurrentThread.ManagedThreadId);
                lock (state.itemsResponses)
                {
                    state.itemsResponses.Add(response);
                }
            } while (true);
        }
        catch (Exception)
        {
            // ...
        }
        finally
        {
            int threadsLeft = Interlocked.Decrement(ref state.runningThreads);
            if (0 == threadsLeft)
            {
                state.Done.Set();
            }
        }
    }
}
You can do the same using asynchronous callbacks; there is no need to manage the threads yourself.
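To make that concrete, here is a minimal sketch of the same throttle-and-collect pattern using futures (shown in Java for illustration; fetch is a hypothetical stand-in for the external-system call). The fixed pool of 25 is the throttle, and joining the futures in submission order returns the results in the order they were requested:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ThrottledFetch {
    // Placeholder for the real external-system call.
    static String fetch(String id) {
        return "result for " + id;
    }

    // At most 25 requests in flight; the returned list preserves request order.
    static List<String> getAll(List<String> ids) {
        ExecutorService pool = Executors.newFixedThreadPool(25); // the throttle
        try {
            List<CompletableFuture<String>> futures = new ArrayList<>();
            for (String id : ids) {
                futures.add(CompletableFuture.supplyAsync(() -> fetch(id), pool));
            }
            List<String> results = new ArrayList<>();
            for (CompletableFuture<String> f : futures) {
                results.add(f.join()); // blocks until this particular result is ready
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}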
Having some queue-like structure to hold the pending requests is a pretty common pattern. In web apps where there may be several layers of processing, you see a "funnel"-style approach, with the early parts of the processing chain having larger queues. There may also be some kind of prioritisation applied to the queues, with higher-priority requests being shuffled to the top.
One important thing to consider in your solution is that if the request arrival rate is higher than your processing rate (this might be due to a denial-of-service attack, or just because some part of the processing is unusually slow today), then your queues will grow without bound. You need some policy, such as refusing new requests immediately once the queue depth exceeds some value.
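In Java, for instance, that policy falls out of a bounded work queue plus a rejecting saturation policy. A sketch with illustrative limits (25 workers, at most 100 queued requests; both numbers are assumptions, not recommendations):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class BoundedServer {
    // Fixed pool of 25 threads; the bounded queue is the depth limit.
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            25, 25,
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<Runnable>(100),
            new ThreadPoolExecutor.AbortPolicy()); // reject when the queue is full

    void handle(Runnable request) {
        try {
            pool.execute(request);
        } catch (RejectedExecutionException e) {
            // Queue depth exceeded: refuse the request and tell the caller to back off.
        }
    }
}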