Would serializing to disk count as aux space complexity? - space-complexity

I have a simple program to serialize a binary tree. Code:
public static <T> void serialize(TBST<T> tree, String fileName) throws FileNotFoundException, IOException {
    /*
     * using only 1 file will create a lot of confusion in coding.
     */
    try (ObjectOutputStream oosNodeData = new ObjectOutputStream(new FileOutputStream(fileName))) {
        preOrderSerialization(tree.getRoot(), oosNodeData);
    }
}

private static <T> void preOrderSerialization(TBSTNode<T> node, ObjectOutputStream oosNodeData) throws IOException {
    if (node == null) {
        return;
    }
    oosNodeData.writeObject(node.element);
    preOrderSerialization(node.left, oosNodeData);
    preOrderSerialization(node.right, oosNodeData);
}
As we can see, the program itself does not use extra space; it just does what it's told: serialize.
What's the auxiliary space complexity? O(n) or O(1)?
Please ignore the stack space.

This is essentially a recursive tree traversal, and as such it would be O(n): each node is visited exactly once, and in the worst case (a degenerate tree) the recursion goes n calls deep. See the following for a good overview: Big-Oh for Recursive Functions

Related

Why does runtime reduce when we use functions instead of two nested for loops

I've observed that the runtime of a program with two nested loops is greater than the runtime of the same code that uses a function instead of the inner loop. Why does that happen when we are performing the same logic in both cases?
Here is the sample code to print the primes in a range using a function:
import java.util.Scanner;

public class Main {
    public static void main(String[] args) {
        int n;
        Scanner input = new Scanner(System.in);
        n = input.nextInt();
        long st = System.nanoTime();
        for (int i = 2; i < n; i++) {
            if (isPrime(i)) {
                System.out.printf("%d ", i);
            }
        }
        long et = System.nanoTime();
        System.out.println();
        System.out.printf("%.4f", (et - st) / 1000000000.0);
    }

    public static boolean isPrime(int n) {
        for (int i = 2; i <= Math.sqrt(n); i++) {
            if (n % i == 0) {
                return false;
            }
        }
        return true;
    }
}
The time taken to run this program is 0.017s for an input of 1000.
Now if I use a nested for loop in the same code instead of the function, the execution time becomes nearly 1.26s.
So my question is: why does nesting the loops increase the time when we are performing the same number of operations in both cases?
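The poster's nested-loop version isn't shown; a reconstruction that mirrors the function version above (my own sketch, with the break playing the role of the early return in isPrime) might look like this:
import java.util.Scanner;

public class Main {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        int n = input.nextInt();
        long st = System.nanoTime();
        for (int i = 2; i < n; i++) {
            boolean flag = true;
            // Inner loop replaces the isPrime call from the version above.
            for (int j = 2; j <= Math.sqrt(i); j++) {
                if (i % j == 0) {
                    flag = false;
                    break; // mirrors the early "return false" in isPrime
                }
            }
            if (flag) {
                System.out.printf("%d ", i);
            }
        }
        long et = System.nanoTime();
        System.out.println();
        System.out.printf("%.4f", (et - st) / 1000000000.0);
    }
}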

Lucene Custom Analyzer - TokenStream Contract Violation

I was trying to create my own custom analyzer and tokenizer classes in Lucene. I mostly followed the instructions here:
http://www.citrine.io/blog/2015/2/14/building-a-custom-analyzer-in-lucene
I updated the code as needed (in Lucene's newer versions the Reader is stored in "input").
However, I get an exception:
TokenStream contract violation: reset()/close() call missing, reset() called multiple times, or subclass does not call super.reset(). Please see Javadocs of TokenStream class for more information about the correct consuming workflow.
What could be the reason for this? I gather calling reset/close is not my job at all, but should be done by the analyzer.
Here's my custom analyzer class:
public class MyAnalyzer extends Analyzer {
    protected TokenStreamComponents createComponents(String FieldName) {
        // TODO Auto-generated method stub
        return new TokenStreamComponents(new MyTokenizer());
    }
}
And my custom tokenizer class:
public class MyTokenizer extends Tokenizer {

    protected CharTermAttribute charTermAttribute = addAttribute(CharTermAttribute.class);

    String[] Terms;
    int CurrentTerm;

    public MyTokenizer() {
        char[] buffer = new char[1024];
        int numChars;
        StringBuilder stringBuilder = new StringBuilder();
        try {
            while ((numChars = this.input.read(buffer, 0, buffer.length)) != -1) {
                stringBuilder.append(buffer, 0, numChars);
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        String StringToTokenize = stringBuilder.toString();
        Terms = Tokenize(StringToTokenize);
    }

    public boolean incrementToken() throws IOException {
        if (CurrentTerm >= Terms.length) {
            return false;
        }
        this.charTermAttribute.setEmpty();
        this.charTermAttribute.append(Terms[CurrentTerm]);
        CurrentTerm++;
        return true;
    }

    static String[] Tokenize(String StringToTokenize) {
        // Here I process the string and create an array of terms.
        // I tested this method and it works ok.
        // In case it's relevant: I parse the string into terms in the constructor.
        // Then in incrementToken I simply iterate over the Terms array and submit
        // them one at a time.
        return Processed;
    }

    public void reset() throws IOException {
        super.reset();
        Terms = null;
        CurrentTerm = 0;
    }
}
When I traced the exception, I saw that the problem was with input.read: it seems there is nothing inside input (or rather, there is an ILLEGAL_STATE_READER in it), and I don't understand why.
You are reading from the input stream in your Tokenizer constructor, before it is reset.
The problem here, I think, is that you are handling the input as a String, instead of as a Stream. The intent is for you to efficiently read from the stream in the incrementToken method, rather than to load the whole stream into a String and pre-process a big ol' list of tokens at the beginning.
It is possible to go this route, though. Just move all the logic currently in the constructor into your reset method instead (after the super.reset() call).
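A minimal sketch of that rearrangement (the field and method names follow the question's code; the whitespace split in tokenize is only a stand-in for the question's own term-splitting logic):
import java.io.IOException;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class MyTokenizer extends Tokenizer {

    private final CharTermAttribute charTermAttribute = addAttribute(CharTermAttribute.class);

    private String[] terms;
    private int currentTerm;

    @Override
    public void reset() throws IOException {
        super.reset();
        // After super.reset() the input reader is in a legal state and may be consumed.
        char[] buffer = new char[1024];
        StringBuilder stringBuilder = new StringBuilder();
        int numChars;
        while ((numChars = input.read(buffer, 0, buffer.length)) != -1) {
            stringBuilder.append(buffer, 0, numChars);
        }
        terms = tokenize(stringBuilder.toString());
        currentTerm = 0;
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (currentTerm >= terms.length) {
            return false;
        }
        charTermAttribute.setEmpty();
        charTermAttribute.append(terms[currentTerm]);
        currentTerm++;
        return true;
    }

    private static String[] tokenize(String stringToTokenize) {
        // Placeholder for the question's own term-splitting logic.
        return stringToTokenize.split("\\s+");
    }
}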

Unit Testing for Java 8 Lambdas

I was wondering if anybody has found a way to stub/mock logic inside a lambda without changing the lambda's visibility?
public List<Item> processFile(String fileName) {
    // do some magic..

    Function<String, List<String>> reader = (name) -> {
        List<String> items = new ArrayList<>();
        try (BufferedReader br = new BufferedReader(new FileReader(name))) {
            String output;
            while ((output = br.readLine()) != null) {
                items.add(output);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return items;
    };

    List<String> lines = reader.apply("file.csv");

    // do some more magic..
}
I would say the rule is that if a lambda expression is so complex that you feel the need to mock out bits of it, it's probably too complex. It should be broken into smaller pieces that are composed together, or perhaps the model needs to be adjusted to make it more amenable to composition.
I will say that Andrey Chaschev's answer, which suggests parameterizing a dependency, is a good one and is probably applicable in some situations. So, +1 for that. One could continue this process and break the processing down into smaller pieces, like so:
public List<Item> processFile(
        String fileName,
        Function<String, BufferedReader> toReader,
        Function<BufferedReader, List<String>> toStringList,
        Function<List<String>, List<Item>> toItemList) {
    List<String> lines = null;
    try (BufferedReader br = toReader.apply(fileName)) {
        lines = toStringList.apply(br);
    } catch (IOException ioe) { /* ... */ }
    return toItemList.apply(lines);
}
A couple of observations on this, though. First, it doesn't work as written, since the various lambdas throw pesky IOExceptions, which are checked, and the Function type isn't declared to throw them. Second, the lambdas you have to pass to this function are monstrous. Even though it doesn't compile (because of the checked exceptions), I wrote it out:
void processAnActualFile() {
    List<Item> items = processFile(
        "file.csv",
        fname -> new BufferedReader(new FileReader(fname)),
        // ERROR: uncaught IOException
        br -> {
            List<String> result = new ArrayList<>();
            String line;
            while ((line = br.readLine()) != null) {
                result.add(line);
            }
            return result;
        }, // ERROR: uncaught IOException
        stringList -> {
            List<Item> result = new ArrayList<>();
            for (String line : stringList) {
                result.add(new Item(line));
            }
            return result;
        });
}
Ugh! I think I've discovered a new code smell:
If you have to write a for-loop or while-loop inside a lambda, you're doing something wrong.
A few things are going on here. First, the I/O library is really composed of different pieces of implementation (InputStream, Reader, BufferedReader) that are tightly coupled. It's really not useful to try to break them apart. Indeed, the library has evolved so that there are some convenience utilities (such as the NIO Files.readAllLines) that handle a bunch of leg work for you.
The more significant point is that designing functions that pass aggregates (lists) of values among themselves, and composing these functions, is really the wrong way to go. It leads every function to have to write a loop inside of it. What we really want to do is write functions that each operate on a single value, and then let the new Streams library in Java 8 take care of the aggregation for us.
The key function to extract here is the one hiding behind the comment "do some more magic", which converts a List<String> into a List<Item>. We want to extract the computation that converts one String into an Item, like this:
class Item {
    static Item fromString(String s) {
        // do a little bit of magic
    }
}
Once you have this, then you can let the Streams and NIO libraries do a bunch of the work for you:
public List<Item> processFile(String fileName) {
    try (Stream<String> lines = Files.lines(Paths.get(fileName))) {
        return lines.map(Item::fromString)
                    .collect(Collectors.toList());
    } catch (IOException ioe) {
        ioe.printStackTrace();
        return Collections.emptyList();
    }
}
(Note that more than half of this short method is for dealing with the IOException.)
Now if you want to do some unit testing, what you really need to test is that little bit of magic. So you wrap it into a different stream pipeline, like this:
void testItemCreation() {
    List<Item> result =
        Arrays.asList("first", "second", "third")
              .stream()
              .map(Item::fromString)
              .collect(Collectors.toList());
    // make assertions over result
}
(Actually, even this isn't quite right. You'd want to write unit tests for converting a single line into a single Item. But maybe you have some test data somewhere, so you'd convert it to a list of items this way, and then make global assertions over the relationship of the resulting items in the list.)
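For instance, such a per-line test might be as small as this (a sketch assuming JUnit 4; the input string and the commented-out assertion are placeholders, since Item's accessors aren't shown in the question):
import static org.junit.Assert.assertNotNull;

import org.junit.Test;

public class ItemTest {

    @Test
    public void convertsSingleLineToItem() {
        Item item = Item.fromString("first");
        assertNotNull(item);
        // assert on whatever fields Item exposes, e.g.:
        // assertEquals("first", item.getName());
    }
}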
I've wandered pretty far from your original question of how to break apart a lambda. Please forgive me for indulging myself.
The lambda in the original example is pretty unfortunate since the Java I/O libraries are quite cumbersome, and there are new APIs in the NIO library that turn the example into a one-liner.
Still, the lesson here is that instead of composing functions that process aggregates, compose functions that process individual values, and let streams handle the aggregation. This way, instead of testing by mocking out bits of a complex lambda, you can test by plugging together stream pipelines in different ways.
I'm not sure if that's what you're asking, but you could extract a lambda from a lambda, i.e. move it to another class or pass it in as a parameter. In the example below I mock the reader creation:
public static void processFile(String fileName, Function<String, BufferedReader> readerSupplier) {
    // do some magic..

    Function<String, List<String>> reader = (name) -> {
        List<String> items = new ArrayList<>();
        try (BufferedReader br = readerSupplier.apply(name)) {
            String output;
            while ((output = br.readLine()) != null) {
                items.add(output);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return items;
    };

    List<String> lines = reader.apply(fileName);

    // do some more magic..
}
public static void main(String[] args) {
    // mocked call
    processFile("file.csv", name -> new BufferedReader(new StringReader("line1\nline2\n")));

    // original call
    processFile("1.csv", name -> {
        try {
            return new BufferedReader(new FileReader(name));
        } catch (FileNotFoundException e) {
            throw new RuntimeException(e);
        }
    });
}

Store and retrieve string arrays in HBase

I've read this answer (How to store complex objects into hadoop Hbase?) regarding storing string arrays in HBase.
There it is said to use the ArrayWritable class to serialize the array. With WritableUtils.toByteArray(Writable ... writable) I get a byte[], which I can store in HBase.
When I then try to retrieve the rows again, I get a byte[] which I somehow have to transform back into an ArrayWritable.
But I can't find a way to do this. Maybe you know an answer, or am I doing something fundamentally wrong in serializing my String[]?
You may apply the following method to get back the ArrayWritable (taken from my earlier answer, see here).
public static <T extends Writable> T asWritable(byte[] bytes, Class<T> clazz)
        throws IOException {
    T result = null;
    DataInputStream dataIn = null;
    try {
        result = clazz.newInstance();
        ByteArrayInputStream in = new ByteArrayInputStream(bytes);
        dataIn = new DataInputStream(in);
        result.readFields(dataIn);
    } catch (InstantiationException e) {
        // should not happen
        assert false;
    } catch (IllegalAccessException e) {
        // should not happen
        assert false;
    } finally {
        IOUtils.closeQuietly(dataIn);
    }
    return result;
}
This method just deserializes the byte array to the correct object type, based on the provided class type token.
E.g.:
Let's assume you have a custom ArrayWritable:
public class TextArrayWritable extends ArrayWritable {
    public TextArrayWritable() {
        super(Text.class);
    }
}
Now you issue a single HBase get:
...
Get get = new Get(row);
Result result = htable.get(get);
byte[] value = result.getValue(family, qualifier);

TextArrayWritable tawReturned = asWritable(value, TextArrayWritable.class);
Text[] texts = (Text[]) tawReturned.toArray();
for (Text t : texts) {
    System.out.print(t + " ");
}
...
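For completeness, the write path the question describes could look like this (my own sketch: row, family, qualifier and htable are the same placeholders as above, and Put.add is the older HBase client API, replaced by addColumn in newer versions):
...
String[] values = {"foo", "bar", "baz"};
Text[] texts = new Text[values.length];
for (int i = 0; i < values.length; i++) {
    texts[i] = new Text(values[i]);
}

// Wrap the strings in the TextArrayWritable shown above and serialize it.
TextArrayWritable taw = new TextArrayWritable();
taw.set(texts);
byte[] serialized = WritableUtils.toByteArray(taw);

Put put = new Put(row);
put.add(family, qualifier, serialized);
htable.put(put);
...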
Note:
You may have already found the readCompressedStringArray() and writeCompressedStringArray() methods in WritableUtils, which seem suitable if you have your own String-array-backed Writable class. Before using them, I'd warn you that they can cause a serious performance hit due to the overhead of the gzip compression/decompression.

How can I use Lucene's PriorityQueue when I don't know the max size at create time?

I built a custom collector for Lucene.Net, but I can't figure out how to order (or page) the results. Every time Collect gets called, I can add the result to an internal PriorityQueue, which I understand is the correct way to do this.
I extended PriorityQueue, but it requires a size parameter on creation: you have to call Initialize in the constructor and pass in the max size.
However, in a collector, the searcher just calls Collect whenever it gets a new result, so I don't know how many results there will be when I create the PriorityQueue. Based on this, I can't figure out how to make the PriorityQueue work.
I realize I'm probably missing something simple here...
PriorityQueue is not a SortedList or SortedDictionary.
It is a kind of sorting implementation that returns the top M results (your PriorityQueue's size) out of N elements. You can add as many items as you want with InsertWithOverflow, but it will hold only the top M elements.
Suppose your search resulted in 1,000,000 hits. Would you return all of the results to the user?
A better way would be to return the top 10 elements to the user (using PriorityQueue(10)), and if the user requests the next 10 results, you can make a new search with PriorityQueue(20) and return the next 10 elements, and so on.
This is the trick most search engines, like Google, use.
Every time Commit gets called, I can add the result to an internal PriorityQueue.
I cannot understand the relationship between Commit and search, therefore I will append a sample usage of PriorityQueue:
public class CustomQueue : Lucene.Net.Util.PriorityQueue<Document>
{
    public CustomQueue(int maxSize) : base()
    {
        Initialize(maxSize);
    }

    public override bool LessThan(Document a, Document b)
    {
        //a.GetField("field1")
        //b.GetField("field2");
        return //compare a & b
    }
}
public class MyCollector : Lucene.Net.Search.Collector
{
    CustomQueue _queue = null;
    IndexReader _currentReader;

    public MyCollector(int maxSize)
    {
        _queue = new CustomQueue(maxSize);
    }

    public override bool AcceptsDocsOutOfOrder()
    {
        return true;
    }

    public override void Collect(int doc)
    {
        _queue.InsertWithOverflow(_currentReader.Document(doc));
    }

    public override void SetNextReader(IndexReader reader, int docBase)
    {
        _currentReader = reader;
    }

    public override void SetScorer(Scorer scorer)
    {
    }
}
searcher.Search(query, new MyCollector(10)); // First page.
searcher.Search(query, new MyCollector(20)); // 2nd page.
searcher.Search(query, new MyCollector(30)); // 3rd page.
EDIT for #nokturnal
public class MyPriorityQueue<TObj, TComp> : Lucene.Net.Util.PriorityQueue<TObj>
    where TComp : IComparable<TComp>
{
    Func<TObj, TComp> _KeySelector;

    public MyPriorityQueue(int size, Func<TObj, TComp> keySelector) : base()
    {
        _KeySelector = keySelector;
        Initialize(size);
    }

    public override bool LessThan(TObj a, TObj b)
    {
        return _KeySelector(a).CompareTo(_KeySelector(b)) < 0;
    }

    public IEnumerable<TObj> Items
    {
        get
        {
            int size = Size();
            for (int i = 0; i < size; i++)
                yield return Pop();
        }
    }
}

var pq = new MyPriorityQueue<Document, string>(3, doc => doc.GetField("SomeField").StringValue);

foreach (var item in pq.Items)
{
}
The reason Lucene's PriorityQueue is size limited is that it uses a fixed-size implementation that is very fast.
Think about the reasonable maximum number of results to return at a time and use that number; the "waste" when the results are few is not that bad compared to the benefit it gains.
On the other hand, if you have such a huge number of results that you cannot hold them, how are you going to serve/display them? Keep in mind that this is for "top" hits, so as you iterate through the results you will be hitting less and less relevant ones anyway.