The Python library matplotlib
does a pretty good job.
Use ax.scatter instead of plt.scatter (as for 2D plots with import matplotlib.pyplot as plt):
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(X, Y, Z, s=10, linewidth=1)
ax.text(X, Y, Z, str(99))
We cannot make the aspect ratio equal with ax.set_aspect('equal') (even though we should be able to). This feature is not implemented yet (what?), so we add the following snippet to make it happen:
extents = np.array([getattr(ax, 'get_{}lim'.format(dim))() for dim in 'xyz'])
sz = extents[:,1] - extents[:,0]
centers = np.mean(extents, axis=1)
maxsize = max(abs(sz))
r = maxsize/2
for ctr, dim in zip(centers, 'xyz'):
    getattr(ax, 'set_{}lim'.format(dim))(ctr - r, ctr + r)
ax.set_box_aspect((1, 1, 1))
Alternatively, set the x, y, and z ranges manually so that the lengths of the ranges are the same, and then do ax.set_box_aspect((1, 1, 1)):
XYZlim = np.array([-3e-3, 3e-3])  # equal-length range, shifted per axis below
ax.set_xlim3d(XYZlim + 1e-3)
ax.set_ylim3d(XYZlim - 0.2e-3)
ax.set_zlim3d(XYZlim + 3e-3)
ax.set_aspect('equal')
ax.set_box_aspect((1, 1, 1))
Source
This snippet applies to ax
, so we can convert it into a function def fix_aspect(ax):
and call it.
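A minimal sketch of such a helper, wrapping the snippet above (note that set_box_aspect needs matplotlib 3.3 or newer):
import numpy as np

def fix_aspect(ax):
    # equalize the data ranges of a 3D axes so that all three scales match
    extents = np.array([getattr(ax, 'get_{}lim'.format(dim))() for dim in 'xyz'])
    centers = np.mean(extents, axis=1)
    r = max(abs(extents[:, 1] - extents[:, 0])) / 2
    for ctr, dim in zip(centers, 'xyz'):
        getattr(ax, 'set_{}lim'.format(dim))(ctr - r, ctr + r)
    ax.set_box_aspect((1, 1, 1))  # cube-shaped plot box
Call fix_aspect(ax) after all the data has been plotted.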
For 2D plots, use plt.axis('scaled') and specify the axes limits using
plt.xlim(a, b)
plt.ylim(c, d)
Caution: plt.axis('scaled') goes before setting plt.xlim() and plt.ylim() for the boundaries to remain faithful.
Plotting each line using a for
loop is too slow. Use LineCollection
and Line3DCollection
instead.
The data has to be in the following format:
[
[ [start_x_1, start_y_1, start_z_1], [end_x_1, end_y_1, end_z_1] ],
[ [start_x_2, start_y_2, start_z_2], [end_x_2, end_y_2, end_z_2] ],
[ [start_x_3, start_y_3, start_z_3], [end_x_3, end_y_3, end_z_3] ],
...
[ [start_x_N, start_y_N, start_z_N], [end_x_N, end_y_N, end_z_N] ]
]
where there are N lines, each starting at start and ending at end.
Note that the array has rank 3.
Example of such construction:
Given two (N,3)
arrays P_start
and P_end
, to draw N
lines between P_start[i]
and P_end[i]
in 3D, construct
ls = [ [p_start, p_end] for p_start, p_end in zip(P_start, P_end) ]
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Line3DCollection
lc = Line3DCollection(ls, linewidths=0.5, colors='b')
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # the collection must be added to a 3D axes
ax.add_collection3d(lc)
plt.show()
**Caution:** For 2D plotting with LineCollection, you must add a scaling command like ax.autoscale() for the plot to show. Otherwise, no line will be drawn. (weird bug?)
Note that the collection is added to ax (and not to plt).
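A minimal 2D sketch of this (the random segments are just placeholder data):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

segments = np.random.rand(20, 2, 2)          # 20 segments: [[x0, y0], [x1, y1]] each
lc = LineCollection(segments, linewidths=0.5, colors='b')
fig, ax = plt.subplots()
ax.add_collection(lc)
ax.autoscale()                               # without this, nothing shows up
plt.show()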
Say we have a function called write_img(t)
that takes integer values.
The for loop
for t in range(starting, ending):
    write_img(t)
can be replaced by
from multiprocessing import Pool
a_pool = Pool()
a_pool.map(write_img, range(starting, ending))
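One caveat: on platforms that spawn worker processes (e.g. Windows and recent macOS), the pool has to be created under an if __name__ == '__main__': guard, otherwise the workers re-import the module and fail. A minimal sketch (write_img here is just a placeholder):
from multiprocessing import Pool

def write_img(t):
    # placeholder for the real frame-writing code
    print('writing frame', t)

if __name__ == '__main__':
    with Pool() as a_pool:          # the context manager closes the pool for us
        a_pool.map(write_img, range(0, 100))
The same pattern works for mapping a function over a list of objects: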
def myfunc(obj):
    x = obj.x
    y = obj.y
    # some stuff with x and y to build obj2
    return obj2

from multiprocessing import Pool
input_list = []
for i in range(10):
    obj_i = ...  # some code here to create the i-th input object
    input_list.append(obj_i)
a_pool = Pool()
output_obj2_list = a_pool.map(myfunc, input_list)
For functions with multiple inputs, use zip() to combine the input variables, or create a partial function on the fly with partial from functools (which is included in the default python installation) to convert it into a one-input function:
def fun2(x, y):
    return x * y

from functools import partial
fun1 = partial(fun2, y=5)   # fun1(x) == fun2(x, 5)

from multiprocessing import Pool
a_pool = Pool()
output_list = a_pool.map(fun1, range(10))
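If you prefer the zip() route, Pool.starmap (Python 3.3+) unpacks each tuple of arguments; a small sketch with the same hypothetical fun2:
from multiprocessing import Pool

def fun2(x, y):
    return x * y

if __name__ == '__main__':
    xs = range(10)
    ys = [5] * 10
    with Pool() as a_pool:
        # each (x, y) pair produced by zip() is unpacked into fun2(x, y)
        output_list = a_pool.starmap(fun2, zip(xs, ys))
    print(output_list)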
If you get the error AttributeError: 'NoneType' object has no attribute 'pack', use pool.close() after the multiprocessing is over to delete the pool.
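A short sketch of that cleanup, assuming the same placeholder write_img-style worker as above:
from multiprocessing import Pool

def write_img(t):
    print('writing frame', t)   # placeholder work

if __name__ == '__main__':
    a_pool = Pool()
    a_pool.map(write_img, range(0, 100))
    a_pool.close()   # no more tasks will be submitted
    a_pool.join()    # wait for the workers to exit and release resources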
Use pathos.multiprocessing (pip3 install pathos) instead of the usual multiprocessing to run parallel processing on partial functions (and, generally, on functions with multiple inputs):
from pathos.multiprocessing import ProcessingPool as Pool
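A minimal sketch, assuming pathos is installed; pathos pools serialize with dill, so lambdas and partials work, and (to my understanding) its map accepts several iterables for multi-argument functions:
from pathos.multiprocessing import ProcessingPool as Pool

def fun2(x, y):
    return x * y

if __name__ == '__main__':
    pool = Pool()
    # two iterables -> fun2 is called pairwise as fun2(x, y)
    output_list = pool.map(fun2, range(10), [5] * 10)
    print(output_list)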
If you want to remove a large file from the repository which was accidentally committed and pushed to GitHub (possibly due to a sloppy .gitignore
), do
git filter-branch --force --index-filter \
'git rm --cached --ignore-unmatch path/to/file.jpg' \
--prune-empty --tag-name-filter cat -- --all
Then, to delete the same in the remote repo,
git push origin --force --all
For a directory, use rm -r ... (instead of rm) in the index filter above.
To rewrite the author email of past commits, use a commit filter:
git filter-branch -f --commit-filter '
if [ "$GIT_AUTHOR_EMAIL" = "OLD_EMAIL" ];
then
GIT_AUTHOR_EMAIL="NEW_EMAIL";
git commit-tree "$@";
else
git commit-tree "$@";
fi' HEAD
Then, force push the repo with
git push --force
If you are working in a linux environment without superuser privileges, you can set up vim (or nvim) and its plugins in the following way.
- Download a prebuilt vim (or nvim) tarball with python3 support.
- Copy the contents (bin, lib, share) of the extracted tar to ~/.local so that it can be launched.
- Put the local vim path (~/.local/bin/vim) before the system vim path (/usr/bin/vim) in your $PATH variable.
- Create .config/nvim/init.vim according to instructions and populate it with appropriate lines.
- Launch nvim and issue :checkhealth to see what does not work.
- In .bashrc add
alias vim=nvim
alias vimdiff="nvim -d"
- Find the python3 path with which python3 and, in the vimrc for nvim, specify the path of python3:
let g:python3_host_prog = <full path from `which python3` output>
- Install pynvim using pip3 install pynvim.
- Run :checkhealth to see if there is still an issue.
- For node support, download it with wget -c https://nodejs.org/dist/v16.13.0/node-v16.13.0-linux-x64.tar.xz, add its bin directory to $PATH in ~/.bashrc and source it, and check that the node command runs.
- Install the neovim node module using `npm install -g neovim`.
- On the :PlugInstall command you will get the error that the Release branch could not be found (for coc.nvim). Fix it with
cd $HOME/.vim/plugged/coc.nvim
npm install
- Launch nvim and check that coc is not showing any error. Then :CocInstall coc-pyright.
I use ranger as a browser within vim via the plugin francoiscabrol/ranger.vim
. To install ranger via pip,
pip3 install ranger-fm
In the local environment, I have noticed the following issues.
- nvim takes a while to load.
- The first launch of ranger inside nvim is very slow; after that it works well.
- The ranger configuration lives in .config/ranger.
If we want to merge the branch remote_branch
to the branch local_branch
(the terms remote
and local
are well-defined in that sense), we do the following.
- Create the (local) branch: git branch local_branch
- Switch to the (local) branch using: git checkout local_branch
- Merge the other branch (remote) into the current one (local): git merge remote_branch
Note: you may get ‘branch does not exist’ error, in that case, checkout
to the remote branch first and then get back to the local one.
- Resolve the conflicts with: git mergetool
- Delete the merged branch: git branch -d remote_branch
- Delete it on the remote as well: git push -d origin remote_branch
To resolve all conflicts automatically in favour of one side, use git merge --strategy-option theirs or git merge --strategy-option ours.
Caution: A better alternative is to go through git mergetool, open the file with vimdiff and then use :%diffget RE or :%diffget LO.
When using vimdiff as a mergetool, quit with :cquit (or :cq, to exit with an error code) so that next time git mergetool will launch vimdiff again. Otherwise, the file is left all messed up with conflict-marker strings like <<<<<<< HEAD and >>>>>>>, is saved as a real file, and mergetool does not do anything.
If you quit vimdiff midway and would like to reset the merge again,
git reset --merge
git merge remote_branch
git mergetool
or with
git mergetool --tool=vimdiff
if git is not configured.
Resize vimdiff window after maximizing using Ctrl+w =
There are 3 windows on top: LOCAL (your branch) on the left, BASE (the common ancestor) in the middle, and REMOTE (the branch being merged) on the right.
Bottom window: final version of the merged file in the local branch, after merging
Move between the conflicts (with [c and ]c), changing lines if needed. To add changes from the remote version (i.e. from branch remote_branch), do :diffget RE. Alternatively, use BA or LO for base or local. For the whole file: :%diffget RE or :%diffget LO.
If the lines get misaligned, do :diffupdate.
Remove the leftover .orig files using git clean -fd
git add filename
(check if this is needed with git status
)git commit -m 'merged remote_branch with local_branch'
git branch -d remote_branch
Caution: attempting to delete the remote branch without committing the merged local branch first will throw a warning. That would be a reminder that the current branch needs to be committed.
To throw away the local changes and pull from the remote:
git reset --hard
git clean -f
git pull
Here, clean -f removes untracked files. clean -fd deletes untracked directories as well. Before clean -f, we can do clean -n (a dry run) to see which files are to be removed.
Alternatively, stash the local uncommitted changes and pull:
git stash
git pull
Then pop the local uncommitted changes back on top of that using
git stash pop
To delete the stash, do git stash drop
instead.
Desired Method: Pull from the internet and put the current committed changes as next commit after the remote version (nice explanation for the reason to rebase)
git pull --rebase
(or just git pull -r
)
This might need merge-conflict resolution
git mergetool --tool=vimdiff
and cleanup
git clean -df
Then, to indicate that rebasing is done, we do
git rebase --continue
followed by git push
to push the final local change. (Note that git commit
is not required and has been done automatically)
This creates a commit history in the order last remote commit, merged local commit.
This (--rebase) is better for the history and creates 2 commits, compared to just git pull (with the default --merge behavior), which creates 3 commits in the order last remote commit, last local commit, merge commit.
To recover from a failed rebase
and revert to the state before the attempted pull
(i.e. local changes remain in local directory), do
git rebase --abort
Note: git pull --rebase
should be used over just git pull
where we do not want to advertise that a merging has been done, e.g. when working on the same branch. This is the most common scenario.
git diff branch_1 branch_2 -- filename.txt
The output will have the following:
Lines in branch_1 (and not in branch_2) are in red (and with -).
Lines in branch_2 (and not in branch_1) are in green (and with +).
To use vimdiff
, replace git diff
with git difftool
.
To change a conflicted file (with conflict markers) into the state of its last commit, do git restore filename
See the commit history with git log, or with `git log -p` to see all the changes (patches) to files. Other useful options are --pretty=oneline or --stat.
Remove the .orig files (a record of the merging) after merging is done using git clean -fd
To keep an otherwise-empty directory in the repository, add a .gitkeep file (conventional name):
touch empty_dir/.gitkeep
and tell .gitignore to not ignore it using
# Inside .gitignore
empty_dir/*
!empty_dir/.gitkeep
git rebase linearizes two diverging branch heads. If we have a diverging branch exp of the main branch, we can do
git checkout exp
git rebase main
to make the changes of exp
as a next commit to the ones in main
. At this point, the last commit of main
becomes a previous commit for exp
. So, we can just merge exp
to main
using
git checkout main
git merge exp
Q: How to change files with conflict markers to its state before attempted merge?
git merge --abort
Note, for uncommitted files, this might revert the files back to last committed state.
Let’s assume that we created an account on GitHub with user <username>
and created a repository called testconfigs
.
Generate a key pair with ssh-keygen. No passphrase, no name (this is important because we won't be adding this public key using ssh-add). Since this has the default name, the program accepts it by some accident. This needs a fix.
Start the agent with eval `ssh-agent -s` (here, these are two backticks, not apostrophes). Then run:
ssh-add
cat ~/.ssh/id_rsa.pub
Copy this output of id_rsa.pub
and paste it in the website’s ssh-key field after logging in.
Then check that you have ssh access to GitHub using:
ssh -T git@github.com
(Caution: not your username or id, but the username git
)
This is most probably a routine check that you have the access. This does not give you a remote tty or you do not need to stay logged in to perform git tasks. You can probably skip this step.
git config --global user.email "email@domain.com"
git config --global user.name "Name"
To append your public key to a server's .ssh/authorized_keys so that you can log in without a password, do
cat ~/.ssh/id_rsa.pub | ssh user@server 'cat >> .ssh/authorized_keys'
To avoid typing the full user@server address, add a nickname in your .ssh/config like this:
Host nickname
User <username>
HostName serve.server.address
Change the URL origin type of your repository to an ssh-based one using:
git remote set-url origin git@github.com:<username>/<repository_name>.git
After that, any git pull
/push
will go through without having to enter the password.
Create some empty directories in your repository, which will be your working directory. Then run:
git init
This will create a .git
directory inside the local repository.
Now add the address of the remote host, which we are naming gitty
, using
git remote add gitty git@github.com:<username>/testconfigs
The repository testconfigs
will be referred to as gitty
.
You can see all the remote hosts by: git remote -v
If needed, you can delete a remote host by : git remote rm nickname_of_host
Now download all the files from the master branch of gitty
by: git pull gitty master
Now your working directory will have all the files stored in gitty
. You can modify them, add more files etc. After modification, add the files to the modification list using:
git add file1
git add file2
git add file3
etc.
After they are added to the changelist, it is the time to commit to the change and add a comment about the changes:
git commit -m "Added file1, file2, changed file3"
After committing, it is time to upload the change back to the place by:
git push gitty master
(Obviously we are uploading to the master branch here)
(Here, for the first time you may be asked to enter your email id and username. Follow the instruction to add these info to the file ~/.gitconfig
)
And we are done.
Create a new repository in the website. Copy the clone address. Locally do
git clone https://github.com/username/reponame
Now you have a local repo.
cd reponame
Add files/make changes e.g. cp path/file .
Add to the git list :
git add file
git commit -m 'message'
git push
Provide username and password for your git repo. If you don’t want to enter username/password repeatedly, add a remote repo:
git remote add petname git@github.com:username/reponame
Next, push to the repo called petname
like this
git push petname
Then you won’t be asked for credentials anymore. And by default it will get pushed to the master branch .
To stage and commit all modifications to the files in the git ls-files list:
git commit -am "Staging and committing all the modifications done to files in git ls-files"
Alternatively, git commit -a -m "text" works.
- Press the Esc key with the pinky finger; Backspace with the pinky finger as well.
- S (substitute) clears the current line and goes into insert mode (compared to dd, where the current line vanishes).
- :set textwidth=69 and :set colorcolumn=+1 breaks the line after 69 characters (linebreak makes sure that words do not break in the middle). However, after editing for a while in this mode, lines will have different widths. It can be fixed with the gq command: gqap (here, ap is a paragraph).
- s is a better alternative to r to edit strings.
- W, E, B instead of w, e, b to avoid special characters. Therefore, dW, dE, dB etc.
- Ctrl+E/Y to move the page without moving the cursor, so that I don't have to look at the bottom of the screen all the time. Even better: zz adjusts the screen so that the cursor is at the middle of the screen; zt and zb place it at the top and bottom.
- tx to jump till the next occurrence of character x, instead of fx which brings the cursor onto the character. So, use dtx instead of dfx to delete till the character x, instead of including the character x. Useful for deleting till the next parenthesis with dt).
- After f or F or t or T is pressed followed by a character, pressing ; and , will take the cursor to the same character further in that direction. Use # and * for the occurrence of the same word under the cursor.
- gf and gx to open the file or hyperlink under the cursor. gf on a hyperlink will open the html file for editing while gx will launch it in the browser.
- g; and g, to move through the changelist.
- ddp to switch the current line with the next, like xp.
- C or D: change or delete the rest of the line.
- U/u on a visual selection to make it all uppercase/lowercase and ~ to flip the case.
- = to fix the indentation of code: == to fix the current line, =5j to fix the next 5 lines; place the cursor on an opening brace { and press =% to fix until the matching closing brace, i.e. the whole block. Finally, if the cursor is inside a code block of {...}, pressing =a{ will fix the indentation of the whole block.
- Ctrl+w to delete a word back in insert mode. In fact, I have also mapped Ctrl+Backspace to the same, to be consistent with browser editing. However, this does not work in the terminal though.
- Ctrl+a to insert the text inserted in the last insert session, while being inside insert mode. Pressing . does the same in normal mode.
- :set ic to set ignorecase and :set noic to bring back case sensitivity while searching.
- `" to jump to the place you last exited the buffer, `` to jump back to where you jumped from, `[ or `] to the start or end of the last yanked or changed text, `< or `> to the start or end of the last visual selection.
- :reg to see all the registers, :reg p q r to see only those registers; registers 1-9 fill up with deleted lines (and 0 with the last yank).
- "rY yanks the current line to the register "r; use "rp to paste the content of register "r. In insert mode, type Ctrl+r then s to paste the content of register s.
- "= can do integer calculations or evaluate commands like system('ls'), and can be used via Ctrl+r = or "=p.
- "% is the current filename and ": is the most recently executed vim command.
- Record a macro into a register (say m), then type @m to execute it. In fact, when you record a macro (with q), it gets copied to the register of the same name.
- :ab <abbreviation> <full\ name> will expand the abbreviation while typing; use :una <abbreviation> to remove the rule.
- Ctrl+A, Ctrl+X to increment or decrement the first number found after the cursor.
- g Ctrl+g to get the word/line/char count and just Ctrl+g to show the file info (also :%) and number of lines etc.
- g- or :earlier to return the buffer to an earlier state, like undo u, but it does not erase the other undo branch. Similarly g+ and :later. Finally, :earlier 1m or 1h or 5f (f for number of file saves ago).
- [[ to jump to the previous opening brace. Similarly ]]. Also, [{ and [( will take you to the previous unmatched opening brace and parenthesis; similarly ]} and ]).
- Open a new line below or above with o and O. To do the same without going into insert mode, I mapped oo and OO for that reason, but this also makes o and O very slow.
- Deleting backwards to the end of the previous word (dB does not work): done with dge and dgE.
- Set clipboard=unnamedplus in your vimrc to use the system clipboard.
- gj and gk to move through wrapped (display) lines.
GNU Netcat is a TCP/IP-based networking utility.
If you do not have netcat
installed, install it first. On Arch, you must install openbsd-netcat
. I accidentally installed the gnu-netcat
and it did not support many of the commands and the standard examples from the internet failed. Ubuntu already has netcat
installed.
After installation, use it using nc
.
Start listening on a port on one machine (-l for listen, -v for verbose):
nc -l -v 1432
Then connect to it from the other machine:
nc server_name_or_ip 1432
If it succeeds, cursors on both of the computers will wait. Now type text in one terminal to send it to the other. Congratulations, you have a highly insecure client-server setup running.
Syntax folding in vim is a powerful feature that makes navigation very easy. Folding can also be used to create an ad-hoc index for the file. Here are my frequently used folding commands.
* | Keyseq | Description |
---|---|---|
 | zi | switch folding on or off |
* | za | toggle current fold open/closed |
 | zc | close current fold |
* | zR | open all folds |
* | zM | close all folds |
 | zv | expand folds to reveal cursor |
* | zj, zk | jump to the next/previous fold, even when unfolded |
 | zo (zO) | open fold (recursively) |
 | zc (zC) | close fold (recursively) |
* | zA | toggle folds recursively |
You can create a vim config file customfolding.vim
for a particular file and launch the file with vim -S customfolding.vim
.
Options in vimrc | Description |
---|---|
set foldnestmax=10 |
“deepest fold is 10 levels |
set nofoldenable |
“dont fold by default |
set foldlevel=1 |
“this is just what i use |
set shiftwidth=1 |
“to consider spaces as one foldlevel away |
Additionally, you can add the following to the custom vimrc for better searching
" DO not expand folds while searching
set foldopen-=search
to stop folds from opening on search, which is even better. Moreover, use
:folddoopen s/old/new/ge
to replace old
with new
in the lines which are not folded.
I keep my smartindent
on, hence I use <F2>
before pasting into the file.
Here is an example of a vimrc where lines starting with one whitespace will be folded into the preceding line:
set smartindent
setlocal foldmethod=expr
setlocal foldexpr=(getline(v:lnum)=~'^$')?-1:((indent(v:lnum)<indent(v:lnum+1))?('>'.indent(v:lnum+1)):indent(v:lnum))
set foldtext=getline(v:foldstart)
set fillchars=fold:\ "(there's a space after that \)
highlight Folded ctermfg=DarkGreen ctermbg=Black
" DO not expand folds while searching
set foldopen-=search
The filetype I handle most frequently is pdf. Here is a list of my most frequently used pdf-related actions.
Convert the pages of a pdf into png images:
pdftoppm -png input.pdf output
pdftk
is another brilliant tool to operate on pdf file. The main thing to remember is cat output
.
pdftk file1.pdf file2.pdf cat output outputfile.pdf
pdftk file.pdf cat 2-6 output outputfile.pdf
pdftk file.pdf cat 1-12 13-end output outputfile.pdf
You can also use pdftk
to encrypt pdf files. See man pdftk
. It’s amazing.
Taskwarrior is a command-line based productivity software. You can use it to maintain a simple shopping list, but it is capable of much more. It is lightweight, open-source and most importantly, terminal-based. This makes it much more powerful than other TODO lists such as Trello.
It is easy to run a Taskwarrior server locally and manage tasks over a group of people.
The executable is called task. Install it on Arch with
pacman -S task
or on Ubuntu with
sudo apt-get install taskwarrior
Then run it using task
. You’ll be prompted to create the config file. Say yes.
Modify the .taskrc
file to use the light-256 theme, which works best with a white terminal background. If you ever consider shifting to a darker terminal, consider exploring other themes. I recommend dark-256, obviously.
git add
the files: .taskrc
, and the directory .task
so that you can have them forever.
task add "Name of the task"
task
task ID done (IDs are 1, 2, 3, ... etc)
task ID delete
task ID modify "new and modified task description"
task ID annotate "and another thing"
task ID modify project:projectname due:duedate
task ID1 ID2 ID3 modify project:projectname
task 1 modify until:eoy
task 1 modify wait:30th
task project:projectname modify +tagname
task project:prj
task project:prj tag:
task project:prj tag: modify +newtag
task started (write "task reports" to see all possible options)
This part is taken from best practices suggested by the author.
task ID modify project:Home
task ID modify due:31st
task ID start
task ID modify priority:M
task ID modify +problem +house
Add the +next tag to a task to boost its urgency:
task ID modify +next
task ID modify depends:OTHER_ID
The rules for date and time format can be changed from rc.dateformat
setting, but the default setting prints the date in the British format.
Example:
task add Open the store due:2015-01-31T08:30:00
task add Pay the rent due:eom
Here the synonym eom
means ‘end of the month’. Synonyms are a useful shortcut to entering lengthy dates. Here is the full set:
now
today
sod
eod
yesterday
tomorrow
monday
january
later
someday
soy
eoy
soq
eoq
som
socm
eom
eoc
Here, eo = end of, so = starting of (the next), soc = starting of the current, d = day, w = week, m = month, y = year, q = quarter, etc. See here for more details.
The rc.dateformat
setting in taskrc
allows you to specify other formats for date input. It supports standard date and time formats (without the %
).
A recurring task is a task with a due date that keeps coming back as a reminder. Here is an example:
task add Pay the rent due:1st recur:monthly until:2015-03-31
To get rid of unnecessary info that is displayed, this page can be a lifesaver. Here is what I found most useful.
Adding the line
verbose=no
to the file .taskrc
will help you get rid of the footnote and header.
Here is an example of a custom report called “verybasic” which contains very specific columns that we want, in our desired order. Either add the following lines to .taskrc
or add task
to the beginning of every line and run individually in terminal (both have the same effect):
report.verybasic.description='A list with very basic information, created by me.'
report.verybasic.columns=id,project,tags,description.count,due
report.verybasic.sort=start-,urgency-
report.verybasic.filter=status:pending
To make this report your default output report, add to your .taskrc
:
default.command=verybasic
or, issue this command in terminal: task config default.command 'verybasic'
To see this custom report in action, just run task verybasic
. Detailed documentation found here.
task undo
task calendar
You can combine more commands to generate desired output.
There is a front-end called vit. Compile it from the AUR (on Arch); alternatively, here is the github.
The website freecinc provides a service to host your tasks so that you do not need to set up your own server. To use this, simply log in to the website and follow their instructions. After you are done, in order to sync manually, run task sync in your client and you are done. To automate it, add this to your crontab -e
:
# Syncing task warrior
3 */2 * * * task sync && notify-send "Syncing Tasks"
which sends out a little notification every time syncing is performed.
On Android, the app Mirakel (Caution: outdated) gives you a way to upload a specially created config file so that you can sync with your server (freecinc, in this case). Format of this config file:
username: foo
org: bar
user key: <your key here>
server: localhost:6544
client.cert:
-----BEGIN CERTIFICATE-----
…
-----END CERTIFICATE-----
Client.key:
-----BEGIN RSA PRIVATE KEY-----
…
-----END RSA PRIVATE KEY-----
ca.cert:
-----BEGIN CERTIFICATE-----
…
-----END CERTIFICATE-----
Get your username, org and server from the line starting with taskd.credentials=
in the file ~/.taskrc
. The format of that line is
taskd.credentials=org\/username\/user_key
Note that the org comes before the username. Get your server from the line staring with taskd.server=
.
Finally, insert the contents of the file *.cert.pem
, *.key.pem
, *.ca.key
.
Save it with any name and send it to the mobile phone so that you can import it into Mirakel.
Tips: Use arithmetic operations in attributes:
task add newtask due:2days scheduled:due-1day
My new favourite is inthe.AM. This one has a web front-end as well as a nice server. Works similarly. You have to login with your email.
To use Mirakel effectively, I use several UDAs which are defined in my taskrc
.
Task Warrior for Android is the most functional development so far, although it is a bit buggy at times.
It is easy to set up Kanban or GTD with Taskwarrior. There are many extensions. Here is a whole list of tools developed around it. Here are a few.
If you have an Xserver running on a remote computer, you can perform GUI-related actions via ssh.
After logging in, set the DISPLAY
variable to :0.0
or to :0
or any other if necessary (which can be found by issuing w
command and looking at the from
field).
Export the display as an environment variable using:
export DISPLAY=:0.0
Then issue your GUI commands like you do in the terminal.
Note: apparently you cannot run a gui program in other user’s tty even by setting the DISPLAY
to the user’s tty.
First export DISPLAY
as above.
There is a software called zenity
that’ll do your work. See zenity --help
for more options. Example:
zenity --info --text "hello there!"
While logging in via ssh, use -X to allow the local Xserver to display the output of the remote Xserver, e.g.
ssh -X user@remote-host
firefox
CoCalc is probably the best outcome of collaborative open source efforts as of today. It is a collaborative online computing environment, mostly for Sage (an open source mathematical software). Apart from sage worksheets, it allows octave, jupyter notebooks, latex and R document editing, and many more. You can just create an account on their website and start using it right away.
But the best part is that you can install Cocalc as a server and let others log in to it and collaborate. No need to install sage
or sagemath
.
Follow the instructions to install cocalc-docker. It is just a one-line command. It will download, extract and install. I had to use sudo to get it to work since I installed docker using sudo (sudo apt-get install docker.io).
To open it in browser, make sure to use https://localhost
instead of
http://localhost
.
I had to create an account (fake email ids are ok, since they are local).
To start and stop cocalc
, use
sudo docker start cocalc
and
sudo docker stop cocalc
To render a symbolic expression, use view(). To print out the latex code of it, use the function latex(). To turn the latex rendering on by default, you can set %typeset_mode to True at the beginning of the sheet.
u_th_x = var('u_th_x', latex_name='u_{\\theta}(x)')
Later, do view(u_th_x)
to see it rendered correctly as
\(u_\theta(x)\).
Installing locally (i.e. in your ~
directory):
install.packages("ggplot2")
System-wide installation in Ubuntu (and saving space in /home
):
sudo apt-get install r-cran-ggplot2
- Read the csv into a data frame: tran <- read.csv(filename, header=TRUE)
- See the column names: names(tran)
- See the first (head) or last (tail) few rows (optional argument n=5 to specify how many rows to display): head(tran) or tail(tran, 3)
- See the structure of the data frame: str(tran)
- See the levels of a factor such as Category: levels(tran$Category)
- Add a weekday column to tran; tran$Date has to be in a date format (possibilities): tran$day <- weekdays(as.Date(tran$Date))
- Order a data frame by day of week (DoW):
daily$DoW <- factor(daily$DoW, levels= c("Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"))
daily[order(daily$DoW), ]
- Bar plot of counts per weekday: barplot(table(tran$day))
t <- Sys.Date()
dayseq <- seq.Date(t, t+6, by=1)
Get the corresponding weekday values by
weekdays(dayseq)
or, abbreviated,
daynames <- weekdays(dayseq, abbreviate=TRUE)
Assume that you have data with a variable named Category
, and Category
can be either Grocery
, Shopping
or Travel
. We would like to anonymize the data by renaming the 3 categories by the numbers 1
, 2
, and 3
.
In order to do that, first convert the variable into a factor using:
data$Category <- factor(data$Category)
Then, you can use levels(data$Category)
to get a vector with only 3 variables. You can change the factor data$Category
the way you change a vector.
The problem is to edit an entry in the data frame which is a category type. For example, if you want to change data[4,"Category"]
to hello
, you cannot change it using data[4,"Category"] <- "hello"
!!!
Here is what you should do instead:
data$Category <- as.character(data$Category)
data[4,"Category"] <- "hello"
data$Category <- factor(data$Category)
It is a bit annoying.
qplot(x=Date, y=Amount, data=tran, geom=c('point','line'), color=Category, alpha = I(0.7))
qplot(factor(timeS), data=tran, geom="bar", fill=factor(Category))
Here Grocery, Shopping and Travel are represented by different colors. The x-axis is the time span. To make one facet per user:
ggplot(tran, aes(timeS, fill=Category)) + geom_bar() + facet_wrap(~ User)
A slight variant:
ggplot(tran, aes(timeS, fill=User)) + geom_bar() + facet_wrap(~ Category)
stat='identity'
is the option that lets you plot y vs x instead of the default statistics count.
ggplot(tran) + geom_bar(aes(timeS, Amount, fill=Category), stat='identity')
With separate user:
ggplot(tran) + geom_bar(aes(timeS, Amount, fill=Category), stat='identity') + facet_wrap(~ User)
ggplot(tran) + geom_bar(aes(x=timeS, y=Amount, fill=User), stat='identity') + facet_wrap(~ Category, nrow = 2)
facet_wrap w.r.t. User of all categories, with a greyscale of the total amount of both users in the background:
ggplot(tran) + geom_bar(aes(timeS, Amount, fill=Category), stat='identity') + geom_bar(data=transform(tran, User=NULL), aes(x=timeS, y=Amount), stat='identity', alpha=I(0.2)) + facet_wrap(~User)
To have a picture in the background of every facet, we need to create a facet without the facet variable. For example, in the previous case, transform(tran, User=NULL)
gives you a data without the facet variable ~User
. We plot a bar geometry of this data.
Alternate representation with Category
and User
interchanged
ggplot(tran) + geom_bar(aes(timeS, Amount, fill=User), stat='identity') + geom_bar(data=transform(tran, Category=NULL), aes(x=timeS, y=Amount), stat='identity', alpha=I(0.2)) + facet_wrap(~Category)
In this context, manysum
is nothing but
ggplot(tran) + geom_bar(aes(timeS, Amount, fill=User), stat='identity')
A git-like command line tool called drive has implemented all the features through an OAuth2 secret key.
Follow the instructions in its readme
file, run
drive init ~/gdrive
to start the service. In the command line, it will give you a link to visit to get a secret key which you enter in the command line. After that, pull, push etc. cover everything. Look at the Readme on the github for all information.
After drive init
, go to the directory you initiated drive and then
drive pull 'directory name'
pulls all the files, directories etc of ‘directory’ name.
Similar to git, to push files to drive, you can create subdirectory inside the local gdrive structure. For example,
cd ~/gdrive
mkdir new-folder
cd new-folder
cp ~/newfile ./
drive push
Be aware, if you do not have a copy of the remote files in the local location, drive push will delete the ones that are not present in the local path or in its children.
Issue: the authentication token expires occasionally, forcing you to follow some link, clicking on “Accept” and generate token, copy-paste it to the terminal window.
Either
drive trash 'file to be trashed'
Or, remember the name (with path) of the directory you want to delete. rm -r
in local path. Then
drive push 'path to the deleted file'
to delete it remotely.
drive emptytrash
drive delete 'file to be deleted forever'
drive list
drive list -depth 3
drive list --matches mp4 pdf mp3
drive list --sort modtime,size_r,version_r Photos
drive stat -r 'directory name'
or until depth 3
drive stat -depth 3 'directory'
$ drive new --folder flux
$ drive new --mime-key doc bofx
$ drive new --mime-key folder content
$ drive new --mime-key presentation ProjectsPresentation
$ drive new --mime-key sheet Hours2015Sept
$ drive new --mime-key form taxForm2016 taxFormCounty
$ drive new flux.txt oxen.pdf # Allow auto type resolution from the extension
Check your storage quota with
drive quota
or, in more detail,
drive about
$ drive copy -r blobStore.py mnt flagging
$ drive copy blobStore.py blobStoreDuplicated.py
$ drive rename url_test url_test_results
$ drive rename openSrc/2015 2015-Contributions
$ drive move photos/2015 angles library archives/storage
Command Aliases: drive supports a few aliases to make usage familiar to the utilities in your shell e.g:
cp : copy ls : list mv : move rm : delete
The url
command prints out the url of a file. It allows you to specify multiple paths relative to root or even by id
$ drive url Photos/2015/07/Releases intros/flux
Find the process id with
pidof processname
or,
ps aux | grep processname
Then kill it with
kill pid
kill -9 pid
killall processname
or
killall -9 processname
To find which process is using a mount point:
lsof | grep /mountpoint
Then kill it using
killall/kill [-9] processname/pid
Often times, you might be asking yourself, what are the parameters for extracting a tar
file? (answer: -xvf
) In any case, to avoid memorizing individual parameters for zip, tar, rar etc, use atool
.
Install atool
using apt-get
: sudo apt-get install atool
.
Commands like apack
, aunpack
, acat
from atool
are now at your disposal.
To extract foobar.tar.gz to a subdirectory (or the current directory if it only contains one file):
aunpack foobar.tar.gz
To create an archive myarchive.zip of the files foo and bar:
apack myarchive.zip foo bar
To pack everything in the current directory into comp.zip:
apack comp.zip *
To list the files in stuff.rar:
als stuff.rar
xfce4-terminal
I moved from xterm
to xfce4-terminal
due to its ability to resize the font on the fly using Ctrl Shift +/-
. xterm
lacked many features which I wanted to use while keeping it lightweight. I have modified some aspects of it though.
- xfce4-terminal provides an option which changes the background color of each terminal window automatically. What's better, it chooses colors from a set of dark and light colors depending on your current terminal colorscheme.
- I modified ~/.bashrc to colorcode the prompt to reflect different information such as disk space, tty type etc.
- I modified ~/.inputrc to use vim keybindings for the terminal. Therefore, in addition to using all basic vim commands, I can press v to edit inside a vim instance. Moreover, there is no need to remember the whole set of editing and movement shortcuts that comes with the terminal.
- xfce4-terminal --drop-down gives a drop-down terminal.
Apart from many shortcuts a linux terminal provides, here are some of my favorite ones.
In the terminal, !! means the last command, and !$ is the last argument of the last command.
For example, if vim /etc/file
gives you permission error, run sudo !!
which is essentially sudo vim /etc/file
.
Also, vim !$
means vim /etc/file
.
Useful example: mkdir longdirectoryname. To enter the directory, do cd !$.
To print the last command with cat in it from the history, do !cat:p.
To run the last command with cat in it from the history, do !cat.
Or do history | grep cat and say the command number 455 is the right one that you want to run. Do !455.
Instead of cp /etc/file /etc/file-old, do cp /etc/file{,-old}; or, instead of mv /etc/file.txt /etc/file.pdf, do mv /etc/file.{txt,pdf}. So, an empty field inside {,} means itself.
Of course, you can define your own alias
in .bashrc
, but these shortcuts will save you a lot of typing effort.
Lifehacker has a few more tips.
After using Linux for a while, you may realize that your /root or /home partition needs more space than anticipated. So, it might be useful to move these directories to their own partitions, if you haven't done so originally. This setup also has the advantage that you can format and install a new copy of Linux without losing the user data. Here is how to move /root to its own partition.
Create a new partition, say /dev/sda7. Mount it in a directory, say /otherlin.
Copy /root to /otherlin using the following commands
cp -urp /root/. /otherlin/
-r
for recursive copy and -p
is to preserve permission, date etc.
Here, -u
means update, i.e. doesn’t overwrite already copied files.
Note that, cp -rp ~/* /otherlin
is not enough since it does not copy the dot files.
Alternative tools suggested: rsync, and curl (nice use!) to have a progress bar etc.
Find the UUID of the new partition with blkid. Then edit /etc/fstab and add the line
UUID=daef66f2-4c7a-4daa-9d7d-f217a3a3994f /root ext4 rw,relatime,data=ordered 0 2
Here, the UUID should be replaced by the appropriate one.
0 means it doesn’t have to be backed up. 2 means the system checks the partition after the first one (1 is /
so /
gets checked first; 0 means no checking).
Move the old directory out of the way: mv /root /root.old
It failed the first time; I had to uncomment the new line in fstab, reboot and try again to succeed.
mount -a
The linux command du (see man du) estimates file space usage. To see the total size of the directory directory along with all its subdirectories, do
du -sh directory/
Here, the -s
parameter shows the total sum of all the files and subdirectories and -h
shows the result in a human readable format.
To show the size of all directory and subdirectory in the current location in ascending order, use
du -sh ./* | sort -h
Here, sort -h
sorts in the human-readable format.
This method does not show you the hidden files or directories. So, here is one trick to do exactly that:
du -sch .[!.]* * | sort -h
The parameter -c
produces a grand total. The shell pattern following the parameters make sure we are searching for all files including the ones that start with a dot. Whatever. There is a program specifically made for disk analyzing and cleanup purposes and it is awesome. It’s called ncdu.
Just install it and run ncdu
and feel the wind.
Xournal is an excellent lightweight software for note-taking with active stylus. However, the development is slow and many community-developed features are not always a part of the main release.
Comment: I would recommend checking out Xournalpp, which is a re-write of xournal in C++.
So, in Ubuntu, apt-get
install doesn’t give you the latest version (e.g. now it is the 2.8.2016 version) or the patches the community developed. Here are the steps.
To install the development dependencies, do
sudo apt-get build-dep xournal
Download the latest tarball, extract it with
tar -xf xournal.**.tar.gz
cd xournal.**
Details of patching can be found here: Here is the summary:
Patch the source code with the download patch file
patch -p1 < patchfile.patch
Note: if the patchfile is a tar file, extract and then -p1
should be changed to -p2
or the required number of depth of path.
Compile and install this modified source file, following this link. The steps for this can be summarized by:
make clean
./autogen.sh; ./configure --prefix=$HOME; make; make install; make home-desktop-install
The last step is to set mime-type etc.
./autogen.sh
make
sudo make install
sudo make desktop-install
So far, I have used the following patches: various improvements, linewidth-patch, vi-style scrolling
Caution: If you use the linewidth-patch, older versions of xournal will not be able to open the files saved by the patched version. Chances are, this patch will not end up in the official version in future, making all the files created with the patched version useless.
To log the input events, add the following line at the top of the file src/xo-callbacks.c:
#define INPUT_DEBUG 1
This will return all the input events from xinput list
to the terminal window from which you launch xournal.
Then, compile and run from terminal to see all the input events popping up at the terminal.
If you are using a DIY distribution like Arch, this is a necessary thing to do after every reboot. So, it is better to wrap the steps up in a script and automate it using rc.d
. Say, I have 3 partitions that I would like to mount to 3 directories.
Go to /media/ and create 3 directories called windows, disk3, storage:
cd /media/
sudo mkdir windows disk3 storage
Create a script called mounting with the following content:
##startup script to mount the disk drives
sudo mount /dev/sda1 /media/windows
sudo mount /dev/sda3 /media/disk3
sudo mount /dev/sda5 /media/storage
#EOF
Make the script executable:
chmod +x mounting
Copy it to /etc/init.d/:
sudo cp mounting /etc/init.d/
Register it with rc.d so that the script is executed at startup:
sudo update-rc.d mounting defaults
Good documentations are as important as good lines of code. No one likes to dig up their old code and not understand whatever they wrote years ago. So, writing documentations should be a simple process. And it is!
Linux manpages or manuals are written in something called the troff markup language. It is not your regular markdown, but uses macros to format text.
In order to preview the manpage file filename
, use
man -l filename
The lines in the manpage should start with macros starting with dots. Here are some example macros:
Command | Description |
---|---|
.Dd | date-of-modification |
.Dt | name-of-the-article 7 (7 for miscellaneous manual. see man man for explanation on the numbering) |
.Sh | NAME |
.Nm | name of the article |
.Nd | description of the article |
Put literal text (such as code) inside a .Bd -literal … .Ed block.
If you write Em anywhere in your text, it will vanish. Write \&Em instead.
Macro | Description |
---|---|
.Sh | New Section |
.Ss | New subsection |
.Bl | begin list. It can take the following parameters: -bullet, -item, -enum, -tag, -hang etc. |
.It | items inside .Bl and .El |
.El | end list |
.Pa | path for a filename |
.Bd | begin a display block, It takes a few parameters, most importantly -literal which is useful for source codes, spaced and tabbed text. |
.Dl | literal text of one line |
.Ed | end display block one line of literal text |
\fB | begins bold |
\fI | begins italic |
\fR | ends bold and/or italic |
.Em | emphasize (a line of bold, similar to \fB ... \fR ). If using inside an item of list, use without the dot, e.g. .It Em tagname |
The escape character \e generates a \. \& goes at the beginning of a sentence and doesn't generate anything itself; it just keeps the following . or macro name from being interpreted.
Type | Output |
---|---|
\e | \ |
\efB | \fB |
\&. | . |
\&Em | Em |
See the BSD manpage for mdoc
online. (Note: mdoc
for Arch is something else)
There are more things you can do with the manpages, such as adding custom files to a man database.
If you have imagemagick
installed (which you should), you can do many magical
things with images. Check out the main website and familiarize yourself with utilities that come with it (e.g. resize
, convert
etc.).
The command import filename.png
gives you a mouse pointer to click on the desired window to take a picture to filename.png
.
To capture the entire screen, do
import -window root filename.png
We can set up an automatic timestamp in the filenames so that they do not get overwritten (provided you take not more than one picture every second):
import -verbose -window root capture-$(date +%d-%h-%Y-%H-%M-%S).png
In this case,
sleep 5; import -window root file.png;
will be more effective.
Taking a screenshot of screen 0 (of DISPLAY 0) through some shell: if you log in through ssh and want to capture a screenshot at that moment, issue:
import -window root -display :0.0 -screen capture-$(date +%d-%h-%Y-%H-%M-%S).png