Oskar Lidelson - Blog
Table of Contents
- Links
- Why Emacs?
- Why Lisp?
- Why Haskell?
- Migrate to IPv6-only everything
- What's so interesting about homomorphisms?
- Solving polynomial equations using homotopy continuation
- Sequentially generating permutations in C++
- Complex code is good code
- The unknown horrors of using mmap(2)
- You just can't win with compression
- How this website is built
- Interesting note: Discrete convolution is really just polynomial multiplication
- Record of Movies/TV-shows/Books/Anime I've Consumed
- Emacs Local Variables
Links
What | Where |
---|---|
CV | https://frostzero-cv-pdfs.s3.ca-central-1.amazonaws.com/oskar-lidelson.pdf |
LinkedIn | https://www.linkedin.com/in/oskar-lidelson |
GitLab | https://gitlab.com/sfreijken |
GitLab - code | https://gitlab.com/sea/public |
GitHub Profile | https://github.com/oskarlidelson |
Why Emacs?
Like an ancient wizard weaving his magics into an amulet, my emacs configuration has grown, and grown. Each year I've woven more of my power into it, to the point where emacs is now a part of me. It's not just an editor; it's literally a part of my mind.
A lisper accumulates every line of code they've ever written into a massive personal library, interconnected through org-mode with their music, media, books, links, diary, budget, documentation, website, repositories, … (and the list goes on for quite some time..)
It's not easy to describe this to the emacs uninitiated. Those unlucky people see editors as tools. Emacs is not a tool, it's a systematic way of life, a method of combining all things that I've ever done into a conglomerated tangle that can be reused integratively.
You either get it, or you don't.
Why Lisp?
A lot of my friends ask me why in the hell I use Lisp. They can't comprehend it. Most of them seem to be turned off by the parentheses, actually, and at least one of them, I'm certain, is just complaining to be difficult and has never given lisp more than a moment's worth of thought (a terrible excuse for a logical person he is!).
I use Lisp for my initial prototypes and ideas, and in general for things that don't need strong correctness guarantees or extreme speed.
Speed
First, it's important to note that computers get much faster year over year. For personal projects (and those with a reasonable number of users, i.e. any small business), a single computer is now most probably powerful enough to serve as your entire infrastructure without serious problems. We now use multiple systems for redundancy and failover, not for performance reasons (at small business/team scales).
Thus, interpreted languages are plenty fast enough. Don't forget that for some ungodly reason the industry seems to like using node.js for backend. Many Lisp implementations (SBCL especially) actually compile down to machine code; you can disassemble it, and it's usually very similar to what a C compiler emits. Look:
```lisp
(defun fibonacci-unoptimized (n)
  (if (member n '(0 1))
      1
      (+ (fibonacci-unoptimized (- n 2))
         (fibonacci-unoptimized (- n 1)))))

(print (disassemble #'fibonacci-unoptimized))
```
```
; disassembly for FIBONACCI-UNOPTIMIZED
; Size: 143 bytes. Origin: #x539BB1B4           ; FIBONACCI-UNOPTIMIZED
; 1B4: 498B4510        MOV RAX, [R13+16]        ; thread.binding-stack-pointer
; 1B8: 488945F0        MOV [RBP-16], RAX
; 1BC: 48837DE800      CMP QWORD PTR [RBP-24], 0
; 1C1: 7508            JNE L2
; 1C3: L0: BA02000000  MOV EDX, 2
; 1C8: L1: C9          LEAVE
; 1C9: F8              CLC
; 1CA: C3              RET
; 1CB: L2: 48837DE802  CMP QWORD PTR [RBP-24], 2
; 1D0: 74F1            JEQ L0
; 1D2: 488B55E8        MOV RDX, [RBP-24]
; 1D6: BF04000000      MOV EDI, 4
; 1DB: FF142550060050  CALL [#x50000650]        ; #x52A00F80: GENERIC--
; 1E2: 4883EC10        SUB RSP, 16
; 1E6: B902000000      MOV ECX, 2
; 1EB: 48892C24        MOV [RSP], RBP
; 1EF: 488BEC          MOV RBP, RSP
; 1F2: B842CA3450      MOV EAX, #x5034CA42      ; #<FDEFN FIBONACCI-UNOPTIMIZED>
; 1F7: FFD0            CALL #S(SB-X86-64-ASM::REG :ID 0)
; 1F9: 480F42E3        CMOVB RSP, RBX
; 1FD: 488955E0        MOV [RBP-32], RDX
; 201: 488B55E8        MOV RDX, [RBP-24]
; 205: BF02000000      MOV EDI, 2
; 20A: FF142550060050  CALL [#x50000650]        ; #x52A00F80: GENERIC--
; 211: 4883EC10        SUB RSP, 16
; 215: B902000000      MOV ECX, 2
; 21A: 48892C24        MOV [RSP], RBP
; 21E: 488BEC          MOV RBP, RSP
; 221: B842CA3450      MOV EAX, #x5034CA42      ; #<FDEFN FIBONACCI-UNOPTIMIZED>
; 226: FFD0            CALL #S(SB-X86-64-ASM::REG :ID 0)
; 228: 480F42E3        CMOVB RSP, RBX
; 22C: 488BFA          MOV RDI, RDX
; 22F: 488B55E0       MOV RDX, [RBP-32]
; 233: FF142548060050  CALL [#x50000648]        ; #x52A00F10: GENERIC-+
; 23A: EB8C            JMP L1
; 23C: CC10            INT3 16                  ; Invalid argument count trap
; 23E: CC14            INT3 20                  ; UNDEFINED-FUN-ERROR
; 240: 00              BYTE #X00                ; RAX(d)
; 241: CC10            INT3 16                  ; Invalid argument count trap
NIL
```
I also want to point out that, as anyone who's done leetcode (urgh! That deserves its own post!) knows, speed is actually more about the cleverness of your algorithm, not about your language (with the exception of special case languages like brainfuck or unlambda!)
Lisp is such an elegant, powerful, expressive language that you can use its more powerful and elegant abstract trickery to write the faster algorithms much more easily. In general, your code should usually end up a lot faster than the fast-but-infantile code a C developer writes, unless that C developer has a month to optimize the code thoroughly. (C++ developers will need much less, since that's quite a powerful language now!)
Environment
Emacs and SLIME, with paredit and so on. See my emacs config.
Enough has been said on the internet about Emacs and its integration with lisp. It is, simply, the single most powerful programming environment known to man (for lisp), and that remains undisputed.
Macros
People who code in languages without real macros will not be able to comprehend them, let alone begin to understand their glory. They simply don't have the mental framework needed. It would be like telling a geometer that there exists a deep connection between geometry and algebra: if they could only study it a little, they'd get it.
It suffices to say that all code is similar; there are only a few ideas that you reuse. Wherever there is redundancy in the code, and I really do mean wherever, you can write code to generate that code and eliminate it. That code-generating code is itself generatable, to the point where an obsessive lisper can condense the codebase into a horrible mega-abstracted structure which probably is glorious and powerful enough to become its own branch of mathematics. (In fact, this is how new branches of mathematics are made! Abstracting repeatedly and distilling out common components until…)
Why Haskell?
I spent some time on The Science of Programming by Gries.
When I don't need the sheer power of Lisp, or the sheer speed of C++, I use Haskell for the sheer strictness and correctness.
It feels great to be able to reason (as well as I can) about the correctness of my code. It still has bugs, but when using Haskell I try to be in a Bourbaki-like ultra-rigorous mindset. If it compiles, it is probably correct.
This is a good mindset to have sometimes, and I like that I get the opportunity now and again to exercise it. That is why I write in Haskell sometimes.
Migrate to IPv6-only everything
IPv6 is beautiful. The address space is huge, everyone gets a massive block to subdivide as they like (unless your ISP hands you only a single /64), and the stateless address autoconfiguration (SLAAC) mechanisms are absolutely gorgeous: nothing ever needs to be manually configured.
From the perspective of distributed systems, the router advertisements, RDNSS, and NDP messages are very cool. The ICMP messages for fragmentation size calculation are neat! Everything talks to everything else and autoconfigures itself dynamically constantly. Nothing is set once, everything is constantly being re-set and expiring so the system as a whole is self healing.
IPv6 is global, and encourages you to set up your firewall appropriately, since you can no longer rely on NAT to 'protect' you. That, and it means that anything can (in principle) talk to anything else. You'll never again be locked out of ssh'ing into your system because your stupid ISP was in the way. You'll never again be locked out of anything. As long as you know the address and have the firewall rules configured, you can connect.
The IPv6 address spaces are so enormous that even if you knew my prefix, you simply could not scan it for hosts, no matter how much bandwidth you had available. This means that there should be a massive reduction in all the random scans and attacks I receive every day on the internet. The network is finally quiet and clean, peaceful.
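To put rough numbers on that claim (my own back-of-the-envelope arithmetic): a single /64 already contains \(2^{64}\) addresses, so even at a million probes per second a full sweep takes

```latex
\frac{2^{64}\ \text{addresses}}{10^{6}\ \text{addresses/s}}
  \approx 1.8 \times 10^{13}\ \text{s}
  \approx \frac{1.8 \times 10^{13}\ \text{s}}{3.15 \times 10^{7}\ \text{s/year}}
  \approx 585{,}000\ \text{years}.
```

Compare that with IPv4, where the entire 32-bit space can be swept in under an hour on commodity hardware.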
I use IPv6-only because IPv4 is broken. NAT broke it and the address blocks are exhausted. The protocol is, as far as I'm concerned, deprecated and obsolete, and should not be used at all. Don't write new code to interoperate with IPv4. Just write pure IPv6 code, and build tunnels to allow you to bypass incompetent ISPs if you must.
What's so interesting about homomorphisms?
Applying a homomorphism to a true statement within an algebraic structure results in another true statement in the destination algebraic structure.
For instance, consider this:
Let \(P(x)\) denote some polynomial with coefficients taken from the reals, i.e. \(P(x)\) is in \(R[x]\).
More importantly, \(P(x)\) is also in \(C[x]\), i.e. the ring of polynomials over the complex numbers.
Observe that complex conjugation, \(f : C \to C\) with \(f(x) = \bar{x}\), is a homomorphism from \(C\) to \(C\). This is straightforward to verify: just check that \(f(x)+f(y) = f(x+y)\) and that \(f(x)f(y) = f(xy)\).
A homomorphism applied to a true statement in a field results in a true statement in the other field. Therefore, if \(P(r) = 0\), then \(P(f(r)) = 0\) too, since \(f\) fixes the real coefficients and so \(P(f(r)) = f(P(r)) = f(0) = 0\). This implies that roots of polynomials with real coefficients come in pairs: \((r, f(r))\) if they're complex, and just \(r\) if they're real (because in that case \(r = f(r)\)).
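The key computation, spelled out: conjugation fixes every real number, so it passes through a real-coefficient polynomial term by term.

```latex
\overline{P(r)}
  = \overline{\textstyle\sum_k a_k r^k}
  = \sum_k \overline{a_k}\,\overline{r}^{\,k}
  = \sum_k a_k \overline{r}^{\,k}
  = P(\overline{r})
  \qquad (a_k \in \mathbb{R}),
```

so \(P(r) = 0\) immediately gives \(P(\overline{r}) = \overline{0} = 0\).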
So here you see that you can use homomorphisms to prove some really interesting things very concisely! They're also incredibly useful for computation, and making spiffy algorithms that map things into a smaller, easier to work in space, operate on them, and then use the results there as a heuristic to search for the true results in the larger space.
Going further
Going further than mere homomorphisms to general 'morphisms' (hints of category theory here), you can study some really cool stuff. In Galois theory, there's a horribly long and complex proof that field extensions of a certain kind (finite, separable ones) correspond to groups, and a second proof that a polynomial is solvable by radicals if and only if you can extend the field step-by-step in a certain way, which happens exactly when the corresponding group is solvable. The general quintic equation's Galois group is not solvable, so it cannot correspond to such a tower of field extensions, and thus these polynomials are not (in general) solvable by radicals.
The proof is nightmarish, can only be understood slowly and in pieces, and I don't believe that I could ever explain it to anyone.
Going even further
It's fun to collect morphisms between distant objects. Find two objects, and see if there's a way to turn one into the other, or at least to relate one to sets of the other. It's actually incredibly fun! You can build a library of 'views' into things. For example, polynomials correspond (via Vieta's formulas) to systems of equations (the system of equations that defines the coefficients in terms of the roots!).
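For instance, Vieta's formulas for a monic cubic relate its roots \(r_1, r_2, r_3\) to its coefficients:

```latex
x^3 + ax^2 + bx + c = (x - r_1)(x - r_2)(x - r_3)
\;\Longrightarrow\;
\begin{aligned}
r_1 + r_2 + r_3 &= -a,\\
r_1 r_2 + r_1 r_3 + r_2 r_3 &= b,\\
r_1 r_2 r_3 &= -c.
\end{aligned}
```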
ToDo: Attach my morphism map here.
Solving polynomial equations using homotopy continuation
Sequentially generating permutations in C++
There are some algorithms where you'd like to iterate over all permutations of n objects. Here's some C++ code that will generate a list of them for you. It can be modified in a straightforward way to iterate instead, perhaps altered to accept a lambda function to apply to each one on generation.
```cpp
#include <iostream>
#include <list>
#include <algorithm>

template<typename type_t>
std::list<std::list<type_t> > permutations (std::list<type_t> set)
{
    typedef std::list<type_t> oset_t;
    typedef std::list<oset_t> oset_oset_t;
    oset_oset_t result_set;
    if (set.size() == 1) {
        result_set.push_back(set);
        return result_set;
    } else {
        for (auto i = set.begin(); i != set.end(); i++) {
            // Copy the set minus the chosen element, and recurse on the rest.
            std::list<type_t> rest_of_list;
            std::copy(set.begin(), set.end(), std::back_inserter(rest_of_list));
            rest_of_list.remove(*i);
            std::list<std::list<type_t> > partial_result =
                permutations<type_t>(rest_of_list);
            // Prepend the chosen element to every sub-permutation.
            for (auto p = partial_result.begin(); p != partial_result.end(); p++)
                p->push_front(*i);
            std::copy(partial_result.begin(), partial_result.end(),
                      std::back_inserter(result_set));
        }
        return result_set;
    }
}

int main ()
{
    std::list<int> S;
    for (int i = 0; i < 6; i++) S.push_back(i);
    std::list<std::list<int> > P = permutations<int>(S);
    for (auto i = P.begin(); i != P.end(); i++) {
        for (auto j = i->begin(); j != i->end(); j++)
            std::cout << (*j);
        std::cout << std::endl;
    }
}
```
Complex code is good code
Readable, simple code should better be called infantile. It's a good first step, but it's incomplete.
You need to write complex, difficult-to-understand code in order to get anything done properly. Consider the most performant algorithms available for known problems: FFT-based integer multiplication, alarmingly complex group-theoretic factorization algorithms, the mind-twisting spatial data structures with their various locality or operation-time guarantees, and I don't even want to get into the complexity of caching or distributed algorithms.
If your code is readable, it is likely also unoptimized. Programmers are not meant to write slow code any more than engineers are meant to build flimsy bridges. If it takes longer to write, so be it. The facts are, we either do it properly or we don't do it at all. I don't accept half-built bridges in the name of development speed.
Nobody builds a skyscraper and says "Hang on, the engineers might not be that smart. Let's use the dumbed-down building techniques and hope it stays up, so that they can have an easier time of it.".
What really happens is that they use the most advanced techniques they have available, and if the other engineers aren't up to snuff, they are in the wrong line of work! It's not my job to dumb down my code for lesser engineers. It's their job to train harder and get themselves to the level where they can work on faster code.
Simple code has its place for educational purposes, for the initial proof-of-concept version, and so forth, but when it comes down to production code it's going to be one unholy abomination, because of all the layers of caching, abstraction, extendability, and indirection you have to build into it for it to be well engineered.
Consider the following implementations of strstr to get an idea of what I'm saying:
Erlang Implementation
```erlang
%Input exhausted; return the accumulated results.
strstr([],_,_,_,_,Result) -> Result;
%Single-character matches require a special handler.
strstr([H|T],TargetString,[H|[]],0,C,Result) ->
    strstr(T,TargetString,TargetString,0,C+1,[{C,C}|Result]);
%The match completed successfully.
strstr([H|T],TargetString,[H|[]],Where,C,Result) ->
    strstr(T,TargetString,TargetString,0,C+1,[{Where,C}|Result]);
%Clause needed for the initial character match to update 'where'.
strstr([H|T],TargetString,[H|TST],0,C,Result) ->
    strstr(T,TargetString,TST,C,C+1,Result);
%Match and continue to match.
strstr([H|T],TargetString,[H|TST],Where,C,Result) ->
    strstr(T,TargetString,TST,Where,C+1,Result);
%No match, continue searching and reset parameters.
strstr([_|T],TargetString,_,_,C,Result) ->
    strstr(T,TargetString,TargetString,0,C+1,Result).

%Initial call. This is exported to the user.
strstr(SearchStr,Target) -> strstr(SearchStr,Target,Target,0,0,[]).
```
C++ Implementation
```cpp
typedef unsigned int uint;

std::list<uint> strstr_cxx (std::string s, std::string target)
{
    uint initial_pos = 0, j = 0;  // j was uninitialized in the original
    std::list<uint> result;
    for (uint i = 0; i < s.length(); i++) {
        if (s[i] == target[j]) {
            if (j == 0) initial_pos = i;
            j++;
            if (j == target.length()) {
                j = 0;
                result.push_back(initial_pos);
            }
        } else if (j > 0) {
            // Mismatch mid-match: rewind to just past the first matched
            // character, so e.g. "ab" is still found in "aab".
            i = initial_pos;
            j = 0;
        }
    }
    return result;
}
```
C Implementation
```c
#define uint unsigned int

void strstr_c (const char * str, const char * target,
               uint * results, uint nresults)
{
    const char * strpend = str + strlen(str), * tpend = target + strlen(target);
    const char * strp, * tp, * beginning = NULL;
    uint * end_results = results + nresults;
    for (strp = str, tp = target; strp != strpend; strp++) {
        if (*strp == *tp) {
            if (!beginning) beginning = strp;
            tp++;
            if (tp == tpend) {
                if (results != end_results)
                    *results++ = beginning - str;
                tp = target;
                beginning = NULL;  /* was left set in the original: bug */
            }
        } else if (beginning) {
            /* Mismatch mid-match: rewind to just past the match start. */
            strp = beginning;
            beginning = NULL;
            tp = target;
        }
    }
}
```
Common Lisp implementation
```lisp
(defun ndfa->func (ndfa)
  "ndfa given as: (STATE-DESCRIPTORS).
STATE-DESCRIPTOR: (:id ID :TERMINAL BOOL :TRANSITIONS (TRANSITION-DESCRIPTORS))
TRANSITION-DESCRIPTOR: (:input input :destination ID)
I enforce the rule that ID ranges from 0 to the upper bound, without gaps."
  (let ((input-position -1)
        (ndfa-states nil)
        (ndfa-ptrs nil))
    ;;Populate ndfa-states:
    (dolist (s ndfa)
      (setf (getf ndfa-states (getf s :id)) (copy-tree s)))
    ;;Func:
    (lambda (input)
      (incf input-position)
      ;;create a new NDFA ptr.
      (push (list :input-position input-position :state-id 0 :inputs nil)
            ndfa-ptrs)
      (dolist (p ndfa-ptrs)
        (let ((associated-state (getf ndfa-states (getf p :state-id))))
          (dolist (transition-i (getf associated-state :transitions))
            (let ((expected-input (getf transition-i :input))
                  (destination-id (getf transition-i :destination)))
              (if (or (null expected-input) (equal expected-input input))
                  ;;Create a new NDFA ptr for each matching input:
                  (push (list :input-position input-position
                              :state-id destination-id
                              :inputs (cons input (getf p :inputs)))
                        ndfa-ptrs)))))
        ;;Delete this ndfa ptr.
        (setf ndfa-ptrs (remove p ndfa-ptrs :test #'eq)))
      ;;If any of them hit a terminal state, return their data:
      (let ((result nil))
        (dolist (p ndfa-ptrs)
          (let ((this-state (getf ndfa-states (getf p :state-id))))
            (when (getf this-state :terminal)
              ;;input-position is the last input position that led us to a terminator.
              ;;Subtract the length of the inputs to get the input position that started this path.
              (push (+ 1 (- (getf p :input-position)
                            (length (getf p :inputs))))
                    result))))
        (values result ndfa-ptrs)))))

(defun strstr-seq (target-seq search-seq)
  (let ((ndfa nil))
    (loop for ci in (coerce target-seq 'list)
          for id from 0
          do (push (list :id id
                         :terminal nil
                         :transitions (list (list :input ci
                                                  :destination (+ id 1))))
                   ndfa)
          finally (push (list :id (+ 1 id) :terminal T :transitions nil) ndfa))
    (let ((f (ndfa->func ndfa))
          (results nil))
      (loop for i in (coerce search-seq 'list)
            do (let ((result (funcall f i)))
                 (when result (push result results))))
      results)))
```
The lisp version is massively more complex, but it's also enormously more powerful: it can operate on general sequences of arbitrary types, can return multiple occurrences of a subsequence (even overlapping ones), and is already in a position to accept infinitely-long inputs.
The lisp version is also unoptimized. It would be more complex still if I had included a small compiler to convert the NDFAs from a nested list format to an array of binary trees. The array holds the state descriptors, with constant lookup time, plus the binary trees which store transition entries by symbol, allowing log time (of the alphabet size) lookup of the next transition.
It would have been faster as well to discard the entire pointer list at each iteration and generate the new ones from scratch, rather than iteratively delete and regenerate the list.
I'm not particularly skilled at sequence algorithms. I bet there's a horribly overcomplicated even better algorithm than this that took some PhD student a few months to figure out, and which can only be found buried in the depths of the most arcane paper..
The unknown horrors of using mmap(2)
It's probably happened to you before. You're running some simulation program on your GNU/Linux system, and suddenly, you realize there was an error in your code; it would allocate far too much memory.
You foresaw issues like this and thus disabled swap. Modern systems don't really need swap space, anyway, unless you've got an SSD in which case it's still fast enough to be useful. However, even with swap disabled, you see kswapd in your process list consuming CPU cycles, and the whole system grinds to a halt. How? Why?
As it turns out, kswapd doesn't just work on swap space. It also swaps out mmap'd pages. Now, mmap is one of the best tools available to a linux programmer because it allows you to operate on large files with essentially transparent caching, and in the case where your accesses have great locality, you'll get massive speedups plus the security of fitting huge datasets into memory even if you don't have enough, knowing that it will only ever swap in the pieces you're currently viewing.
The fact that mmap is so awesome means that it's in use everywhere. That, of course, is a double edged sword. If your system is under memory pressure, (ie. some program is using up almost all of the available memory) then kswapd will come into play and begin paging out those mmapd regions. The problem, though, is that those mmap'd pages are, by design, for processes making active use of those files!
Thus, every mmap'd page that gets swapped out will be needed again, and soon. Your system grinds to a halt, attempting to swap the pages in and out endlessly. Even with swap space entirely disabled, your system will thrash purely because of the popularity of mmap.
Checkmate. Linux under memory pressure has been beaten by its own efficient system features. The only real solution is to use ulimit to set hard memory limits, and prevent your rogue simulator from exerting memory pressure in the first place.
At the time of writing, there hasn't been any real progress toward a solution, i.e. the ability to mark individual processes as 'unswappable no matter what', which would allow you to mark the GUI-related processes and, even under intense load, retain some control over your system; just enough to summon a terminal and issue the kill commands for your rogue processes.
As it stands now, once the thrashing begins you can't even effectively summon the OOM-killer with the SysRq key. Technically you can, but it will take minutes, perhaps hours, before the kernel is able to rise from its stupor and honor the request.
You just can't win with compression
You can use information theory to prove this: lossless compression is really just a permutation of the set of all strings in your language. Some strings become representable by shorter strings, but others necessarily become longer. In total, the sum of information remains unchanged (information is conserved).
Good compression functions are those which map interesting inputs to shorter strings, while uninteresting inputs become longer ones.
So for instance, you'd want an algorithm which encodes written text and all human-created interesting media to compress well, but randomized strings of gibberish to become much longer under 'compression', because we don't care about those.
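The counting argument behind all of this is the pigeonhole principle: there are \(2^n\) bit strings of length \(n\), but strictly fewer strings of any shorter length combined,

```latex
\sum_{k=0}^{n-1} 2^k \;=\; 2^n - 1 \;<\; 2^n,
```

so no injective (i.e. lossless) compressor can map every length-\(n\) input to a shorter output; if some inputs shrink, others must grow.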
How this website is built
This site is actually written and managed in emacs org-mode, compiled to html in emacs, and then pushed to a static-serving bucket.
Some of the assets were manually created (the CSS, etc.)
CICD pipeline
On push, a pipeline job runs that builds it, and the resulting html (and other) files are pushed to a bucket.
The CICD pipeline has an aws key which only has permission to push to that one bucket.
Infrastructure Description
In AWS, there's a load balancer with an amazon-provided SSL certificate for the domain, that points to the static-site setup on that bucket.
The actual repo
This site is in the repo here
The repo itself is a terraform module which instantiates the bucket and iam key. That module knows about its own repo, and thus it can use the gitlab provider from the caller to inject the key into its own protected variable configuration so that the pipelines can run.
Some of the code itself!
This makes this website somewhat of a quine! The code to build it is in itself!
```terraform
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws, aws.us-east-1]
    }
  }
}

variable "domain" {} # Actual domain module object.
variable "domain-name-full" {}

# ToDo: required_providers aws, gitlab

resource "aws_s3_bucket" "blog" {
  bucket = "${var.domain-name-full}-blog"
}

resource "aws_s3_bucket_website_configuration" "blog" {
  bucket = aws_s3_bucket.blog.id
  index_document {
    suffix = "index.html"
  }
  error_document {
    key = "error.html"
  }
}

output "http-domain" {
  value = aws_s3_bucket_website_configuration.blog.website_domain
}
```
Of course we want to ensure that the load balancer can hit the bucket and grab the site. If this were excellent code, only the load balancer would have permission to do this, but it's good enough to have it simply be public:
```terraform
resource "aws_s3_bucket_public_access_block" "enable-public-access" {
  bucket                  = aws_s3_bucket.blog.id
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}
```
Now a policy to grant access to that bucket's objects:
```terraform
resource "aws_s3_bucket_policy" "allow-access-all" {
  bucket = aws_s3_bucket.blog.id
  policy = data.aws_iam_policy_document.allow-access-all.json
}

data "aws_iam_policy_document" "allow-access-all" {
  statement {
    principals {
      type        = "AWS"
      identifiers = ["*"]
    }
    actions   = ["s3:GetObject", "s3:ListBucket"]
    resources = [
      aws_s3_bucket.blog.arn,
      "${aws_s3_bucket.blog.arn}/*"
    ]
  }
}
```
Actually inject the code. I'll use the code generated in the 'output' directory (where I'll have org-mode dump its outputs)
```terraform
module "s3-bucket-upload" {
  source         = "git@gitlab.com:sea/public/tf-modules/aws/s3-directory-upload.git"
  bucket         = aws_s3_bucket.blog.id
  directory_path = "${path.module}/output"
}
```
S3 static sites only generate HTTP endpoints. To make the site HTTPS accessible, we'll need to generate a certificate for the domain first.
Luckily I can reuse one of my other modules for this.
```terraform
module "cert-for-domain" {
  source      = "git@gitlab.com:sea/public/tf-modules/aws/domain-validated-cert.git"
  domain      = var.domain
  domain_name = var.domain-name-full
  providers = {
    aws = aws.us-east-1
  }
}
```
Now we set up a CloudFront distribution that has the bucket as its backend.
I would prefer to link to an ELB, but ELBs can't use an S3 bucket website as a backend natively.
```terraform
resource "aws_cloudfront_distribution" "www" {
  aliases = [var.domain-name-full]

  origin {
    domain_name = aws_s3_bucket_website_configuration.blog.website_endpoint
    origin_id   = "s3-www.${aws_s3_bucket.blog.id}"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only" # S3 does NOT support https on bucket websites
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }

  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "s3-www.${aws_s3_bucket.blog.id}"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 37
    default_ttl            = 37
    max_ttl                = 37
    compress               = true
  }

  viewer_certificate {
    acm_certificate_arn      = module.cert-for-domain.cert.arn
    ssl_support_method       = "sni-only"
    minimum_protocol_version = "TLSv1.2_2021"
  }
}
```
Finally, you want a DNS alias record which sends you from the real domain to that distribution.
```terraform
module "route53-alias" {
  source         = "git@gitlab.com:sea/public/tf-modules/aws/route53-alias-pair.git"
  zone-id        = var.domain.zone_id
  name           = "www"
  alias-dns-name = aws_cloudfront_distribution.www.domain_name
  alias-dns-zone = aws_cloudfront_distribution.www.hosted_zone_id
}
```
Interesting note: Discrete convolution is really just polynomial multiplication
If \(f\) and \(g\) are two polynomials in \(R[x]\), and \(F(i)\) is the discrete convolution of their coefficient sequences, then \(F(i)\) is exactly the coefficient of \(x^i\) in the product \(f \cdot g\).
ToDo: Use this for something.
Record of Movies/TV-shows/Books/Anime I've Consumed
This is managed in git here: https://gitlab.com/sea/public/diary/media
Books
2014
Title | Author |
---|---|
WWW - Wake | Robert J. Sawyer |
WWW - Watch | Robert J. Sawyer |
WWW - Wonder | Robert J. Sawyer |
Beyond Heaven's River | Greg Bear |
The Strain Trilogy - The Strain | Guillermo del Toro, Chuck Hogan |
Orphans of the Sky | Robert A. Heinlein |
Brain Wave | Poul Anderson |
The Last Unicorn | Peter S. Beagle |
Backwoods | Sara Reinke |
The Dead Boys | Jonathan Curwen |
Dark Aeons | Z.M. Wilmot |
The Science of Fear | |
Solaris | Stanislaw Lem |
In the Shadows | Rebecca Rogers |
The Cyberiad | Stanislaw Lem |
Dastardly Deeds of Horror and Mayhem | D.K. Ryan |
Cosmos | Carl Sagan |
The Frail Ocean | Wesley Marx |
The Earth and its Oceans | Duxbury |
The Ages of Gaia | James Lovelock |
The Biology of Plants | |
The Wold of Night | |
Methods of Logic | Quine |
Principles of Mathematical Logic | Hilbert |
A First Course in Formal Language Theory | |
The Best Mysteries of Isaac Asimov | Isaac Asimov |
Structural Introduction to Chemistry | Harris E.T. |
Emergence | Stephen Johnson |
The Strain Trilogy - The Fall | Guillermo del Toro, Chuck Hogan |
Chronicles of the Necromancer - The Summoner | Gail Z Martin |
The Time Machine and the Invisible Man | H.G. Wells |
Binary | Michael Crichton |
Flashforward | Robert J. Sawyer |
A Canticle for Leibowitz | Walter M. Miller, Jr. |
Ringworld | Larry Niven |
The Strain Trilogy - The Night Eternal | Guillermo del Toro, Chuck Hogan |
Pandemonium | Warren Fahy |
Time's Eye | Arthur C. Clarke, Stephen Baxter |
State of Fear | Michael Crichton |
Scientific Paranormal Investigation | Benjamin Radford |
Chronicles of the Necromancer - The Blood King | Gail Z Martin |
Peace of Earth | Stanislaw Lem |
Complexity: A Guided Tour | Melanie Mitchell |
The Old Man and the Sea | Ernest Hemingway |
Cause of Death | Patricia Cornwell |
Why does E = MC2? | Brian Cox |
A History of Science - Volume 1 | Edward Huntington |
2015
Title | Author |
---|---|
The Elegant Universe | Brian Greene |
Prey | Michael Crichton |
The Standard Model, The Unsung Triumph of Modern Physics | Robert Oerter |
Chaos | James Gleick |
The 37th Mandala | Marc Laidlaw |
Our Mathematical Universe | Max Tegmark |
The Book of Nothing | John D. Barrow |
Seize the Night | Dean Koontz |
Calculating God | Robert J. Sawyer |
Micro | Michael Crichton |
Snow Crash | Neal Stephenson |
Consider Phlebas | Iain M. Banks |
The Ringworld Engineers | Larry Niven |
The Ringworld Throne | Larry Niven |
Jurassic Park | Michael Crichton |
The Beginning of Infinity | |
Paranormality | |
The Story of Phi | |
By the Light of the Moon | Dean Koontz |
The Lost World | Michael Crichton |
The Wave | Susan Casey |
The Dark Half | Stephen King |
Night Shift | Stephen King |
Signal to Noise | Eric S. Nylund |
Darwin's Children | Greg Bear |
The Taking | Dean Koontz |
Wizard at Large | Terry Brooks |
Frankenstein | Mary Shelley |
The Edge of Physics | |
Blind Descent | Nevada Barr |
Against a Dark Background | Iain M. Banks |
The Right Hand of Evil | John Saul |
Fear Nothing | Dean Koontz |
Everything's Eventual | Stephen King |
2016
Title | Author |
---|---|
The Linux Security HOWTO | |
The Deep | Nick Cutter |
The Circle | Dave Eggers |
Axiomatic | Greg Egan |
Lectures on Elementary Mathematics | Lagrange |
Luminous | Greg Egan |
Stories of Your Life and Others | Ted Chiang |
Intensity | Dean Koontz |
Dreamcatcher | Stephen King |
Asimov's Magazine July 2016 | |
Harry Potter and the Cursed Child | |
ANALOG SF&F magazine July/August 2016 | |
Like Death | Tim Waggoner |
The Brain - The Last Frontier | Richard M. Restak |
Annihilation | Jeff VanderMeer |
Symmetry and the Monster | Mark Ronan |
Others | James Herbert |
Singularity Sky | Charles Stross |
2017
Title | Author |
---|---|
A Universe From Nothing | Lawrence Krauss |
Time Odyssey - Sunstorm | Arthur C. Clarke, Stephen Baxter |
Creative Thinkering | Michael Michalko |
The Wardstone Chronicles - Revenge of the Witch | |
Nudge | Richard H. Thaler, Cass R. Sunstein |
The Supernaturalist | Eoin Colfer |
Ghost From the Grand Banks | Arthur C. Clarke |
Demon Seed | Dean Koontz |
2018
Title | Author |
---|---|
Fragment | Warren Fahy |
Sphere | Michael Crichton |
Influx | Daniel Suarez |
Timescape | Gregory Benford |
End of an Era | Robert J. Sawyer |
The Andromeda Strain | Michael Crichton |
Hominids | Robert J. Sawyer |
Humans | Robert J. Sawyer |
Rendezvous with Rama | Arthur C. Clarke |
2001: A Space Odyssey | Arthur C. Clarke |
Accelerando | Charles Stross |
Foundation | Isaac Asimov |
Numbers Rule Your World | Kaiser Fung |
2019
Title | Author |
---|---|
Frameshift | Robert J. Sawyer |
Mindscan | Robert J. Sawyer |
Battle Angel Alita - Official Movie Novelization | Pat Cadigan |
2020
Title | Author |
---|---|
The Currents of Space | Isaac Asimov |
Glasshouse | Charles Stross |
2021
Title | Author |
---|---|
Coalescent | Stephen Baxter |
Exultant | Stephen Baxter |
A Different Universe | Robert B. Laughlin |
The New Silk Roads | Peter Frankopan |
The Three-Body Problem | Cixin Liu |
The Dark Forest | Cixin Liu |
2022
Title | Author |
---|---|
Spellslinger | Sebastien de Castell |
The Madness of Crowds: Gender, Race, and Identity | Douglas Murray |
Until the End of Time | Brian Greene |
The Hollow Ones | Guillermo Del Toro, Chuck Hogan |
Breaking Boundaries - The Science of Our Planet | Johan Rockström, Owen Gaffney |
The Animal Manifesto: Six Reasons for Expanding Our Compassion Footprint | Marc Bekoff |
Feminism: A Very Short Introduction | |
Analytic Philosophy: A Very Short Introduction | |
The End of Animal Farming | Jacy Reese |
Logic - A Very Short Introduction | Graham Priest |
Environmental Ethics - A Very Short Introduction | Robin Attfield |
2023
Title | Author |
---|---|
Denialism: How Irrational Thinking Hinders Scientific Progress | Michael Specter |
Blindsight | Peter Watts |
The Gods Themselves | Isaac Asimov |
Silent Earth - Averting the Insect Apocalypse | Dave Goulson |
BLAME! | |
Once Upon a Time We Ate Animals | Roanne Van Voorst |
Going Postal | Terry Pratchett |
The Extended Mind - The Power of Thinking Outside the Brain | Annie Murphy Paul |
Symmetry | Marcus Du Sautoy |
Myst - The Book of Atrus | |
Let the Right One In | John Ajvide Lindqvist |
Rollback | Robert J. Sawyer |
Leech | Hiron Ennes |
The World Inside | Robert Silverberg |
Camouflage | Joe Haldeman |
2024
Title | Author |
---|---|
Darkfall | Dean Koontz |
Animal Liberation Now | Peter Singer |
Ship of Magic | Robin Hobb |
Love and Math | Edward Frenkel |
The Joy of Less, A Minimalist Living Guide | Francine Jay |
Casino Royale | Ian Fleming |
Good Omens | Terry Pratchett, Neil Gaiman |
Less is More | Jason Hickel |
The Reality Bubble | Ziya Tong |
The Troop | Nick Cutter |
It's Elemental: The Hidden Chemistry in Everything | Kate Biberdorf |
The Haunting of Hill House | Shirley Jackson |
2025
Title | Author |
---|---|
Anime
Anime Seen
Title | Comment |
---|---|
Future Diary (Mirai Nikki) | |
Deadman Wonderland | Mockingbird is cute! |
Another | |
The Familiar of Zero | Cozy |
Ghost Hunt | Cute Slow Romance! |
Blood-C | Surprising twist |
Death Note | After ep 12 it's rubbish |
Steins;Gate | Cute Romance! |
Shinsekai Yori | Spooky Dystopia! |
Dance in the Vampire Bund | |
Sword Art Online | Kirito is AWESOME! |
Sword Art Online 2 | Sinon is Sexy AF |
Haiyore! Nyarko-San! | Endless References - Hilarious |
.hack//SIGN | Slow, but interesting |
Accel World | Pretty animation |
Attack on Titan | SLOW |
Shiki | Terrible Art Style |
Date A Live | Hot Girls |
Elfen Lied | Incredibly Sad |
Ergo Proxy | Dark, Surprising Twist |
The Melancholy of Haruhi Suzumiya | Cozy |
Tokko | |
Noragami | Cozy |
Ghost Hound | Weird |
Tokyo Ghoul | Will this main character ever shut up? |
Young Justice | Eh |
Justice League | |
Justice League Unlimited | |
Owari No Seraph | |
Charlotte | Epic Romance! |
Riddle Story of Devil (Akuma No Riddle) | |
Shingeki No Bahamut - Genesis | EPIC Art and Animation |
Brotherhood - Final Fantasy XV | Felt like crying |
Zombie Loan | |
Knights of Sidonia | Super Cool Scifi |
Goblin Slayer | ELF GIIIRLLLL |
Heaven's Official Blessing | Romance, YES |
The Orbital Children | |
Irina - The Vampire Cosmonaut | |
Wandering Witch: The Journey of Elaina | |
Overlord | |
A.I.C.O. Incarnation |
Documentaries
Documentaries Seen
Title |
---|
America's Stone Age Explorers |
Ancient Inventions |
Ape Man |
Before We Ruled the Earth |
Cave of Forgotten Dreams |
Chemistry: A Volatile History |
Clash of the Cavemen |
Connections 1 |
Connections 2 |
Connections 3 |
DNA Mysteries: The Search for Adam |
Journey of Man |
Lascaux - The Prehistory of Art |
Legacy - The Origins of Civilization |
Living in the Past |
Lost Tribe of Palau |
Magnetism |
Monsters We Met |
Mysteries of Mankind |
Neanderthal (Discovery) |
Neanderthal (Horizon) |
Neanderthals on Trial |
Origins of Us |
Planet of the Apemen: Battle for Earth |
Prehistoric Americans |
Quest for the Phoenicians |
Riddle of the Human Hobbits |
Skull Wars: The Missing Link |
Stone Age Atlantis |
Stone Age Columbus |
Stories from the Stone Age |
Ted's Evolution |
The Ape that Took Over the World |
The Ascent of Man |
The Day the Universe Changed |
The Day we Learned to Think |
The Great Leap Forward |
The Incredible Human Journey |
The Lapedo Child |
The Mystery of the Human Hobbit |
The Real Eve |
The Story of Science |
The Tribal Eye |
Tutankhamun's Fireball |
Walking with Cavemen |
What is Human |
TV Shows
TV Shows Seen
Title | Where |
---|---|
Arcane | Netflix |
Behind Her Eyes | Netflix |
Captain LaserHawk | Netflix |
Cyberpunk Edgerunners | Netflix |
Dahmer | Netflix |
Elves | Netflix |
Inside Job | Netflix |
Loki | Disney+ |
Pantheon | |
Resident Evil | Netflix |
Rick and Morty | TV |
Secret Level | Prime |
Sex Education | Netflix |
South Park | TV |
Stranger Things | Netflix |
Tear Along the Dotted Line | Netflix |
The Dropout | Netflix |
The End of the Fkin World | Netflix |
The Flash | Netflix |
The Great | HBO |
The Handmaid's Tale | |
The Mist | Netflix |
The Mysterious Benedict Society | |
The Sandman | Netflix |
The Simpsons | TV |
The Stranger | Netflix |
The X-Files | |
Wednesday | Netflix |
WestWorld | HBO |
What If | Disney+ |
Emacs Local Variables
This section exists only to fold away the Emacs local variables that live at the bottom of this file.
If you're curious, this is what's included:
(add-hook 'after-save-hook (lambda () (if (y-or-n-p "Tangle?") (org-babel-tangle))) nil t)
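In the org file itself, a form like this sits in a standard Local Variables block on the last page of the file (a sketch of the usual layout; Emacs will ask for confirmation before applying an `eval:` entry unless it has been marked safe):

```org
# Local Variables:
# eval: (add-hook 'after-save-hook (lambda () (if (y-or-n-p "Tangle?") (org-babel-tangle))) nil t)
# End:
```

The trailing `nil t` arguments to `add-hook` make the hook buffer-local, so the "Tangle?" prompt appears only when saving this particular file rather than on every save in Emacs.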