Reading from and writing to a file in Prolog

I have a programming assignment in one of my classes: implement a working binary search tree in Prolog, along with a few traversals and other functions. Below is the code I currently have. I cannot figure out how to implement the predicate readTree, or how to output the integer lists produced by the inorder, preorder, and postorder predicates. Any help would be much appreciated!
%testing trees
tree3(node(6, node(4, node(2, nil, nil), node(5, nil, nil)), node(9, node(7, nil, nil), nil))).
tree2(node(8, node(5, nil, node(7, nil, nil)), node(9, nil, node(11, nil, nil)))).
tree1(nil).
%returns true if empty, false otherwise; not the 'Prolog' way, obviously, but it prints the expected output. Time permitting, return to this.
isEmpty(nil) :- write(true).
isEmpty(node(_, _, _)) :- write(false).
%as of now, all traversals must start by initializing a tree, e.g. tree1(T), inorder(T, List).
%inorder traversal of tree
inorder(nil, []).
inorder(node(Key, Left, Right), List) :-
    inorder(Left, LL), inorder(Right, LR),
    append(LL, [Key|LR], List).
%preorder traversal of tree
preorder(nil, []).
preorder(node(Key, Left, Right), List) :-
    preorder(Left, LL), preorder(Right, LR),
    append([Key|LL], LR, List).
%postorder traversal of tree
postorder(nil, []).
postorder(node(Key, Left, Right), List) :-
    postorder(Left, LL), postorder(Right, LR),
    append(LL, LR, R1), append(R1, [Key], List).
%helps all traversal predicates print to file
loop_through_list(File, List) :-
    member(Element, List),
    write(File, Element),
    write(File, ' '),
    fail.
write_list_to_file(Filename, List) :-
    open(Filename, write, File),
    \+ loop_through_list(File, List),
    close(File).
%getMin, gets smallest integer in tree
min(node(X, L, _R), Min) :- min_helper(L, X, Min).
min_helper(nil, X, X). % nil, not null, to match the tree representation
min_helper(node(X, L, _R), _X0, Min) :- min_helper(L, X, Min).
%getMax, gets largest integer in tree
%inserting an element into the tree
insert(nil, Key, node(Key, nil, nil)) :-
    write('Inserted '), write(Key), nl.
insert(node(Key, Left, Right), Key, node(Key, Left, Right)) :-
    !, write('Key already in tree'), nl.
insert(node(K, Left, Right), Key, node(K, NewL, Right)) :-
    Key < K, !,
    insert(Left, Key, NewL).
insert(node(K, Left, Right), Key, node(K, Left, NewR)) :-
    insert(Right, Key, NewR).
%delete a key, and shift tree respectively
delete(nil, Key, nil) :-
    write(Key), write(' not in tree'), nl.
delete(node(Key, L, nil), Key, L) :- !. % also covers the leaf case (L = nil)
delete(node(Key, nil, R), Key, R) :- !.
delete(node(Key, L, R), Key, node(Pred, NL, R)) :-
    !, get_pred(L, Pred, NL).
delete(node(K, L, R), Key, node(K, NL, R)) :-
    Key < K, !,
    delete(L, Key, NL).
delete(node(K, L, R), Key, node(K, L, NR)) :-
    delete(R, Key, NR).
get_pred(node(Pred, L, nil), Pred, L) :- !.
get_pred(node(Key, L, R), Pred, node(Key, L, NR)) :- get_pred(R, Pred, NR).
%treeRead, not yet implemented
:- debug.
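For the missing readTree, one common classroom approach is to keep the keys in the file as a single Prolog term and build the tree by repeated insertion. This is only a sketch under that assumed file format; build_tree/3 and bst_insert/3 are helper names I made up:

```prolog
% readTree(+Filename, -Tree): assumes the file holds one Prolog list of
% integers terminated by a period, e.g.  [6,4,2,5,9].
readTree(Filename, Tree) :-
    open(Filename, read, Stream),
    read(Stream, Keys),          % read/2 parses the whole list as one term
    close(Stream),
    build_tree(Keys, nil, Tree).

% fold the keys into the tree one at a time
build_tree([], Tree, Tree).
build_tree([K|Ks], Acc, Tree) :-
    bst_insert(Acc, K, Acc1),
    build_tree(Ks, Acc1, Tree).

% a silent BST insert used only for building
bst_insert(nil, Key, node(Key, nil, nil)).
bst_insert(node(K, L, R), K, node(K, L, R)) :- !.   % duplicate: keep tree as-is
bst_insert(node(K, L, R), Key, node(K, NL, R)) :-
    Key < K, !, bst_insert(L, Key, NL).
bst_insert(node(K, L, R), Key, node(K, L, NR)) :-
    bst_insert(R, Key, NR).
```

For the output half of the question, the write_list_to_file/2 predicate above already works, e.g. ?- readTree('keys.txt', T), inorder(T, List), write_list_to_file('inorder.txt', List).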

Related

How to shuffle elements of Mutable list in Kotlin?

I wanted to create a MutableList of the alphabet, shuffle it, and store the result in another MutableList.
I used the shuffle() function, but it shuffled the original list as well, which I didn't want to happen, since I will be using the original list to map it against the new shuffled one.
fun main() {
    val alphabets = ('A'..'Z').toMutableList()
    var shuffAlp = alphabets
    shuffAlp.shuffle()
    println(alphabets)
    println(shuffAlp)
}
So I had to create two mutable lists and then shuffle one of them:
val alphabets = ('A'..'Z').toMutableList()
var shuffAlp = ('A'..'Z').toMutableList()
shuffAlp.shuffle()
This might be a trivial question, but is there any other way where I do not have to create two copies of the same list?
shuffle shuffles the original list in place; shuffled copies the list and returns the shuffled copy.
The same distinction holds for sort & sorted, sortBy & sortedBy, and reverse & asReversed:
fun main() {
    val alphabets = ('A'..'Z').toMutableList()
    val shuffAlp = alphabets.shuffled()
    println(alphabets)
    println(shuffAlp)
}
Result:
[A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z]
[U, B, A, N, H, R, O, K, X, C, W, E, Q, P, J, Z, L, Y, S, M, I, D, V, F, G, T]
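The sort/sorted pair mentioned above behaves the same way; a small sketch (the example list is my own):

```kotlin
fun main() {
    val nums = mutableListOf(3, 1, 2)  // hypothetical example list
    val sortedCopy = nums.sorted()     // copies, then sorts the copy
    println(nums)        // [3, 1, 2] -- original untouched
    println(sortedCopy)  // [1, 2, 3]

    nums.sort()          // sorts the original in place
    println(nums)        // [1, 2, 3]
}
```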

What's the time complexity of Dijkstra's Algorithm

Dijkstra((V, E)):
    S = {}                                            // O(1)
    for each vertex v ∈ V:                            // O(V)
        d[v] = ∞                                      // O(1)
    d[source] = 0                                     // O(1)
    while S != V:                                     // O(V) iterations
        v = non visited vertex with the smallest d[v] // O(V)
        for each edge (v, u):                         // O(E) over the whole run
            if u ∉ S and d[v] + w(v, u) < d[u]:
                d[u] = d[v] + w(v, u)
        S = S ∪ {v}
This question may be a duplicate of some posts:
Understanding Time complexity calculation for Dijkstra Algorithm
Complexity Of Dijkstra's algorithm
Complexity in Dijkstras algorithm
I read them, and even some posts on Quora, but I still cannot understand. I put some comments in the pseudocode and tried to work it out, but I am really confused about why it is O(E log V).
The "non visited vertex with the smallest d[v]" lookup is actually O(1) if you use a min-heap; insertion into (and removal from) the min-heap is O(log V).
Therefore the complexity is as you correctly mentioned for the other loops:
O((V logV) + (E logV)) = O(E logV) // Assuming E > V which is reasonable
It is O((V logV) + (E logV)) = O(logV * (V + E)) for general graphs.
You shouldn't just assume that the graph is dense, i.e. |E| = O(|V|^2), since most graphs in applications are actually sparse, i.e. |E| = O(|V|).
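To make the heap-based version concrete, here is a sketch in Python using the standard-library heapq (the adjacency-dict input format is my own choice):

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping each vertex to a list of (neighbor, weight) pairs."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap = [(0, source)]               # (tentative distance, vertex)
    visited = set()
    while heap:
        d, v = heapq.heappop(heap)     # extract-min: O(log V)
        if v in visited:
            continue                   # stale heap entry, skip it
        visited.add(v)
        for u, w in graph[v]:          # every edge relaxed at most once overall
            if u not in visited and d + w < dist[u]:
                dist[u] = d + w
                heapq.heappush(heap, (dist[u], u))   # O(log V) per push
    return dist

g = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}
print(dijkstra(g, 'a'))   # {'a': 0, 'b': 1, 'c': 3}
```

With at most V useful pops and up to E pushes, each costing O(log V), the total is O((V + E) log V), which is the O(E log V) bound for connected graphs.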

Existential goals are filled in too soon

I have a Class containing both data and axioms. I want to build another instance in proof mode, based on (1) an existing instance and (2) some other input. I want to destruct this second input before creating the new instance of the record.
The minimal Class that works as an example is shrunk from one in jwiegley/category-theory:
Require Import Coq.Unicode.Utf8.
Require Import Coq.Init.Datatypes.
Require Import Coq.Classes.Morphisms.
Require Import Coq.Classes.SetoidDec.
Generalizable All Variables.
Reserved Infix "~>" (at level 90, right associativity).
Reserved Infix "∘" (at level 40, left associativity).
Record Category := {
  obj : Type;
  uhom := Type : Type;
  hom : obj -> obj -> uhom where "a ~> b" := (hom a b);
  homset :> ∀ X Y, Setoid (X ~> Y);
  compose {x y z} (f : y ~> z) (g : x ~> y) : x ~> z
    where "f ∘ g" := (compose f g);
  compose_respects x y z :>
    Proper (equiv ==> equiv ==> equiv) (@compose x y z);
}.
Suppose (2) is bool:
Definition newCat (C : Category) (b : bool) : Category.
Proof.
  destruct b.
  - eapply Build_Category.
    Unshelve.
At this point, obj is filled in with Type:
C : Category
============================
∀ x y z : Type, Proper (equiv ==> equiv ==> equiv) (?compose x y z)
subgoal 2 (ID 18) is:
∀ x y z : Type, (λ _ A : Type, A) y z → (λ _ A : Type, A) x y → (λ _ A : Type, A) x z
This behavior disappears if I remove the compose_respects axiom (or use some other kind of Record without such a field). If I change Category into a Class, obj will be filled in as the obj of C. It seems to have something to do with typeclass resolution (the fact that the equivs have implicit typeclass arguments?).
Is there some way to prevent these (or any!) variables from being filled in by unification? The optimal result would be something like eapply+Unshelve where no existentials are generated at all, and I can fill in the record's fields as subgoals, in order.
It looks like simple notypeclasses refine {| obj := _ |} does the trick.
{| obj := _|} is record syntax that functions as shorthand for Build_Category _ _ _ _ _.
simple notypeclasses refine is all one tactic. It's a variant of notypeclasses refine that doesn't shelve goals and performs no reduction.
Sadly there isn't a generic notypeclasses combinator, unlike unshelve. There's just notypeclasses refine and simple notypeclasses refine.
For debugging, you can use the (undocumented) Set Typeclasses Debug. This reveals that eapply Build_Category does resolve some typeclasses, and refine {| obj := _|} is even worse.
As an aside, I don't think it makes sense to have Class Category without any type-level parameters - why would you ever want just any category automatically inferred?

Split lines in clojure while reading from file

I am learning clojure at school and I have an exam coming up. I was just working on a few things to make sure I get the hang of it.
I am trying to read from a file line by line and as I do, I want to split the line whenever there is a ";".
Here is my code so far
(defn readFile []
  (map (fn [line] (clojure.string/split line #";"))
       (with-open [rdr (reader "C:/Users/Rohil/Documents/work.txt.txt")]
         (doseq [line (line-seq rdr)]
           (clojure.string/split line #";")
           (println line)))))
When I do this, I still get the output:
"I;Am;A;String;"
Am I missing something?
I'm not sure if you need this at school, but since Gary already gave an excellent answer, consider this as a bonus.
You can do elegant transformations on lines of text with transducers. The ingredient you need is something that allows you to treat the lines as a reducible collection and which closes the reader when you're done reducing:
(defn lines-reducible [^BufferedReader rdr]
  (reify clojure.lang.IReduceInit
    (reduce [this f init]
      (try
        (loop [state init]
          (if (reduced? state)
            @state
            (if-let [line (.readLine rdr)]
              (recur (f state line))
              state)))
        (finally
          (.close rdr))))))
Now you're able to do the following, given input work.txt:
I;am;a;string
Next;line;please
Count the length of each 'split'
(require '[clojure.string :as str])
(require '[clojure.java.io :as io])

(into []
      (comp
        (mapcat #(str/split % #";"))
        (map count))
      (lines-reducible (io/reader "/tmp/work.txt")))
;;=> [1 2 1 6 4 4 6]
Sum the length of all 'splits'
(transduce
  (comp
    (mapcat #(str/split % #";"))
    (map count))
  +
  (lines-reducible (io/reader "/tmp/work.txt")))
;;=> 24
Sum the length of all words until we find a word that is longer than 5
(transduce
  (comp
    (mapcat #(str/split % #";"))
    (map count))
  (fn
    ([] 0)
    ([sum] sum)
    ([sum l]
     (if (> l 5)
       (reduced sum)
       (+ sum l))))
  (lines-reducible (io/reader "/tmp/work.txt")))
or with take-while:
(transduce
  (comp
    (mapcat #(str/split % #";"))
    (map count)
    (take-while #(> 5 %)))
  +
  (lines-reducible (io/reader "/tmp/work.txt")))
Read https://tech.grammarly.com/blog/building-etl-pipelines-with-clojure for more details.
TL;DR embrace the REPL and embrace immutability
Your question was "what am I missing?" and to that I'd say you're missing one of the best features of Clojure, the REPL.
Edit: you might also be missing that Clojure uses immutable data structures so
consider this code snippet:
(doseq [x [1 2 3]]
  (inc x)
  (prn x))
This code does not print "2 3 4"; it prints "1 2 3", because x isn't a mutable variable.
During the first iteration (inc x) gets called and returns 2, but that value gets thrown away because it wasn't passed to anything; then (prn x) prints the value of x, which is still 1.
Now consider this code snippet:
(doseq [x [1 2 3]] (prn (inc x)))
During the first iteration the inc passes its return value to prn so you get 2
Long example:
I don't want to rob you of the opportunity to solve the problem yourself so I'll use a different problem as an example.
Given the file "birds.txt"
with the data "1chicken\n 2duck\n 3Larry"
you want to write a function that takes a file and returns a sequence of bird names.
Let's break this problem down into smaller chunks:
First, let's read the file and split it up into lines.
(slurp "birds.txt") will give us the whole file as a string
clojure.string/split-lines will give us a collection with each line as an element in the collection
(clojure.string/split-lines (slurp "birds.txt")) gets us ["1chicken" "2duck" "3Larry"]
At this point we could map some function over that collection to strip out the number like (map #(clojure.string/replace % #"\d" "") birds-collection)
or we could just move that step up the pipeline when the whole file is one string.
Now that we have all of our pieces we can put them together in a functional pipeline where the result of one piece feeds into the next
In Clojure there is a nice macro to make this more readable, the -> macro
It takes the result of one computation and injects it as the first argument to the next
so our pipeline looks like this:
(-> "C:/birds.txt"
    slurp
    (clojure.string/replace #"\d" "")
    clojure.string/split-lines)
One last note on style: Clojure functions stick to kebab-case, so readFile should be read-file.
I would keep it simple, and code it like this:
(ns tst.demo.core
  (:use tupelo.test)
  (:require [tupelo.core :as t]
            [clojure.string :as str]))
(def text
  "I;am;a;line;
This;is;another;one
Followed;by;this;")
(def tmp-file-name "/tmp/lines.txt")
(dotest
  (spit tmp-file-name text) ; write it to a tmp file
  (let [lines       (str/split-lines (slurp tmp-file-name))
        result      (for [line lines]
                      (for [word (str/split line #";")]
                        (str/trim word)))
        result-flat (flatten result)]
    (is= result
         [["I" "am" "a" "line"]
          ["This" "is" "another" "one"]
          ["Followed" "by" "this"]])
Notice that result is a doubly-nested (2D) matrix of words. The simplest way to undo this is the flatten function to produce result-flat:
    (is= result-flat
         ["I" "am" "a" "line" "This" "is" "another" "one" "Followed" "by" "this"])))
You could also use apply concat as in:
(is= (apply concat result) result-flat)
If you want to avoid building up a 2D matrix in the first place, you can use a generator function (a la Python) via lazy-gen and yield from the Tupelo library:
(dotest
  (spit tmp-file-name text) ; write it to a tmp file
  (let [lines  (str/split-lines (slurp tmp-file-name))
        result (t/lazy-gen
                 (doseq [line lines]
                   (let [words (str/split line #";")]
                     (doseq [word words]
                       (t/yield (str/trim word))))))]
    (is= result
         ["I" "am" "a" "line" "This" "is" "another" "one" "Followed" "by" "this"])))
In this case, lazy-gen creates the generator function.
Notice that for has been replaced with doseq, and the yield function places each word into the output lazy sequence.

How to get the map function to not return something?

In SML/NJ, the map function basically says: for each element x in a list, apply the function f to it, and return the list of the new values. But suppose f does a comparison and returns a string: if the comparison is true it returns the string, but if it's false it shouldn't return anything, and nothing should be put into the list that map is building.
Is this possible to do?
Instead of using map, use one of the variants of fold (either foldl or foldr). Another option is, of course, to simply do a filter before you do the map.
As a simple example, imagine that you want to return a list of squared integers, but only if the original integers are even numbers. A filter-then-map approach might look like:
fun square_evens xs =
(List.map (fn x => x * x)) (List.filter (fn x => x mod 2 = 0) xs)
Or, you could use a foldr approach.
fun square_evens xs =
  List.foldr (fn (x, xs') =>
               if x mod 2 = 0
               then (x * x) :: xs'
               else xs') [] xs
Slightly longer, but arguably clearer, and probably more efficient.
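For completeness, the Basis Library also has List.mapPartial, which fits this "map that sometimes returns nothing" pattern directly: the function returns SOME value to keep an element and NONE to drop it. A sketch of the same example (the primed name is just to avoid clashing with the definitions above):

```sml
(* mapPartial keeps only the SOME results *)
fun square_evens' xs =
  List.mapPartial
    (fn x => if x mod 2 = 0 then SOME (x * x) else NONE)
    xs

(* square_evens' [1, 2, 3, 4] evaluates to [4, 16] *)
```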