Posts
-
May 12, 2017
In this video at 57:58, Michael Jordan says that no one teaches
causal inference. So he explains it to us in 30 seconds.
Read more...
-
May 3, 2017
Recently, YouTube has released this horrendous feature: Seriously? Words cannot describe my feelings about this. Luckily, you can disable it in a few simple steps: Go to the Chrome Web Store and search for uBlock Origin. Install it by clicking the Add to Chrome button. Once installed, go to the settings of the extension by...
Read more...
-
Mar 2, 2017
In this talk: The Long-Term Future of (Artificial) Intelligence, Stuart Russell explains the situation with the Comprehensive Test-Ban Treaty (from 16:40 to 22:54). The United States, which proposed the treaty, still has not ratified it, supposedly because detection mechanisms are not good enough to catch other nations cheating. This has motivated the UN to...
Read more...
-
Feb 19, 2017
The BENEFICIAL AI 2017 conference has gathered an impressive panel of thinkers, including Yoshua Bengio, Stuart Russell, Andrew McAfee, Demis Hassabis, Ray Kurzweil, Yann LeCun, Shane Legg, Nick Bostrom, Jaan Tallinn (to name just a few). The technical talks that I found most interesting are the following: Interactions between the AI Control Problem and the...
Read more...
-
Feb 18, 2017
Scott Phoenix, co-founder of Vicarious, talks about the “Kiss of death of AI” in this interview: Vicarious’ Scott Phoenix on AI & race to unlock human brain to create AGI, our last invention (link starts at 21m50s). [Scott] We’re a bunch of PhDs in a room. We’re solving fundamental research problems on how to build...
Read more...
-
Feb 13, 2017
Nick Bostrom has this interesting idea: the “vulnerable world hypothesis”
the idea that localized offensive technologies can dominate over large-scale defensive ones
(source)
More details in this talk:
Interactions between the AI Control Problem and the Governance Problem
Read more...
-
Feb 12, 2017
Andrej Karpathy wrote this landmark post about the Unreasonable effectiveness of RNNs. Generative Adversarial Networks, or GANs for short, also seem more powerful than they should be. This technique feels gimmicky: we create a generator and a discriminator network, and then we have them fool one another… And yet, the results are so great, they...
Read more...
-
Feb 9, 2017
In this great post: What sort of thing a brain is, Nate Soares makes a case for the need for human rationality, but I think that his view about the brain as a mutual-information machine is even more profound: A brain is a specialty device that, when slammed against its surroundings in a particular way,...
Read more...
-
Dec 11, 2016
Andrej Karpathy reviews some convnet architectures in this lecture: CS231n Winter 2016 Lecture 7, Convolutional Neural Networks. At 45:51, he starts reviewing different types of convnets:
45:59 LeNet-5
46:58 AlexNet
54:20 ZFNet
57:17 VGGNet
1:01:58 GoogLeNet
1:05:09 ResNet. This is the one architecture that is gradient-flow friendly.
In a following lecture, he explains LSTMs: CS231n...
Read more...
-
Dec 9, 2016
Nancy Kanwisher talks about the Visual word form area in this talk: What do you mean, “Just Phrenology”? (starting at 16:45), and at 18:27, she explains her project: The idea would be to scan infants pretty much from birth to get maps of connectivity of the brain. And then you scan them later and get...
Read more...
-
Nov 13, 2016
You can view the Stanford course about convnets on YouTube:
CS231n : Convolutional Neural Networks for Visual Recognition
It is taught by Andrej Karpathy and Justin Johnson.
Definitely one of the best courses about deep learning. I cannot believe it is free and so few people have
watched it.
Read more...
-
Nov 13, 2016
Amund Tveit has a nice list of ICLR 2017 papers about unsupervised deep learning. He also references this lecture by Salakhutdinov: Foundations of Unsupervised Deep Learning (Ruslan Salakhutdinov, CMU). I thought I was following this part of the field somewhat closely. Now I feel like I have been living under a rock. Stuff that was...
Read more...
-
Nov 7, 2016
This post title is quite a mouthful. I haven’t written in a while, so I have quite a bit to dump out. I’ll just throw out some seemingly unrelated ideas and see what happens. - Tenenbaum’s Blessing of Abstraction: [The] abstract layer of knowledge can serve a role as inductive bias even when the abstract knowledge...
Read more...
-
Aug 28, 2016
In Intuitive Theories as Grammars for Causal Inference, Josh Tenenbaum et al. give a mathematical formalization of their hierarchical Bayesian framework for causal inference. But before doing so, they issue a warning, which I think is both spot on and hilarious (p. 19): The next section introduces the technical machinery of our hierarchical Bayesian framework. If...
Read more...
-
Aug 27, 2016
Crazy actions during GSL season 2 of 2016 (Ro32 games).
Nexus save:
Stats attacks Myungsik, and Myungsik defends his third. Notice the phoenix micro.
Attack starts here: https://youtu.be/cejdcLTyU1k?t=3m29s
Phoenix micro here: https://youtu.be/cejdcLTyU1k?t=3m55s
Warp prism micro:
Catching stalkers with a prism, and later using the prism to pull back injured units:
https://youtu.be/Jvey_KeMB9g?t=33m27s
Read more...
-
Aug 25, 2016
Good housing block design is key to a prosperous city. A few tips about designing such blocks can be found here and here. In this post I’ll list some palace blocks: blocks where the housing level is luxury palace only. A couple of notes: What constitutes a good block? Stability: housing must be rock-solid and...
Read more...
-
Aug 14, 2016
There is a big shift taking place in programming, and odds are that it will continue.
Read more...
-
Aug 14, 2016
From Deep Convolutional Inverse Graphics Network: Various work [3, 4, 7] has been done on the theory and practice of representation learning, and from this work a consistent set of desiderata for representations has emerged: invariance, meaningfulness of representations, abstraction, and disentanglement. In particular, Bengio et al. [3] propose that a disentangled representation is one...
Read more...
-
Aug 8, 2016
Probabilistic models of cognition is a truly great idea. It is the kind we need in AI in order to make tangible progress. Side note: because the content is not easily accessible, few people can appreciate how great it is, and this is a bit sad. Anyhow, these ideas are really great (If you...
Read more...
-
Aug 1, 2016
The reply by “nv-vn” here reminds me of how we’ve lost track of the original purpose of AI. We’ve lost track so much that we now stick a ‘G’ in the middle. A lot of the comments here seem to focus only on the progress we’ve made rather than what AI should be (if it...
Read more...
-
May 30, 2016
He is a Spanish composer and classical guitarist of the Romantic period, one of the only Romantic composers that I actually like. I highly recommend him if you want to put your baby to sleep while also enjoying the music. He also composed the Nokia tune (in Gran Vals), as well as some more stuff:...
Read more...
-
May 25, 2016
I am now finally writing this in carpalx. It’s been 3 days since the complete switch on my home computer.
It feels horrible. But every day less so.
Read more...
-
May 24, 2016
In this video, Noah Goodman presents his “Probabilistic Language of Thought Hypothesis”: Mental representations (concepts) are functions in a stochastic lambda calculus (e.g. Church). It seems crazy indeed. A thought, or the meaning of a sentence, would be an expression in a probabilistic language. Even crazier is that it seems to work. It can...
Read more...
-
May 20, 2016
Josh Tenenbaum and Noah Goodman have this concept that the arrows in a Bayes net (or graphical* model) need to be infinitely thick, and that’s a big motivation behind probabilistic programs. *graphical model in the sense of “graph” here, not “graphics”. Josh Tenenbaum talks about thick arrows here: Engineering & reverse-engineering human common sense. Noah...
Read more...
-
May 19, 2016
This really sums up what I think about the field of AI in general. In addition to having a potentially
very high impact, it is intellectually satisfying.
As Josh Tenenbaum puts it in this video (Engineering & reverse-engineering human common sense)
[…] and most of the cool work is in the future.
Read more...
-
May 15, 2016
Hammerspoon is a utility for macOS that turns Lua code into UI behaviour, the same way Karabiner turns XML code into key remappings. It will be my tool of choice for interacting with the UI: focusing, moving, and resizing windows. It is a descendant of the defunct and/or inactive Slate and Mjolnir. A couple...
Read more...
-
May 15, 2016
In this video: AGI 2011 - Probabilistic Programs: A New Language for AI, Noah Goodman says at 42:39: [Lambda calculus] is all we need, as the basis for creating a probabilistic programming language (Church in this case). A couple of points about the video: This is a great intro to probabilistic programming languages, and why...
Read more...
-
May 12, 2016
I don’t yet fully understand what probabilistic programming is, but this talk made it a lot clearer: “An Overview of Probabilistic Programming” by Vikash K. Mansinghka To a very gross approximation: this is the current process with which you design machine learning algorithms. Probabilistic programming is about automating the 3 maths bits (with the sigma...
Read more...
-
May 11, 2016
Experts are pretty sure that the elusive ANI-to-AGI transition, and then the rapid AGI-to-ASI transition, will occur within our lifetimes. Which is a good thought to have in the back of your mind when pondering the question: “What the hell should I do with my life?” We each are in...
Read more...
-
May 8, 2016
I want to throw out yet another random idea. I’ll develop it more in future posts, but here is the gist of it: You can think of CNNs, Boltzmann machines or deep Boltzmann machines as learning a set of features, and representing a particular input vector as a linear combination of those features. In the same...
Read more...
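The "linear combination of learned features" idea above can be sketched numerically. This is a toy illustration (the feature matrix and input vector are made up for the example, not taken from the post): the learned features are the rows of W, and an input x is represented by the coefficients of its least-squares expansion in those features.

```python
import numpy as np

# Toy "learned features": 3 features in R^4, stacked as rows of W.
W = np.array([
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 0.0],
])

x = np.array([2.0, 1.0, 2.0, 1.0])  # input vector to represent

# Coefficients h such that x ~= W.T @ h (least-squares expansion).
h, *_ = np.linalg.lstsq(W.T, x, rcond=None)
x_hat = W.T @ h  # reconstruction from the feature combination

print(np.allclose(x, x_hat))
```

Here x happens to lie exactly in the span of the features (x = 2·f1 + 1·f2), so the reconstruction is exact; in general the features only approximate the input.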
-
May 7, 2016
From this lecture:
Ruslan Salakhutdinov: “Advanced Hierarchical Models”:
All I have to say is: Hell yes.
Read more...
-
May 6, 2016
It is possible to get to GM by cannon rushing. If you want to know how, just check out Printf’s channel: https://www.twitch.tv/quasarprintf/v/63725910 He always goes for the cannon rush, on any map and against any race. Overall strategies are: Vs Protoss: go for a straight kill with cannons. Try to kill the Nexus. Vs Zerg: contain...
Read more...
-
May 5, 2016
Recently, I briefly talked about Hinton’s capsule theory. It has its roots in this paper: Transforming Auto-encoders. (I don’t know which one came first, capsules or transforming autoencoders. But they are clearly similar). In the discussion section, one limitation of capsules is brought up: Using multiple real-values is the natural way to represent pose information...
Read more...
-
May 4, 2016
There was this post at the top of the singularity subreddit: Is Big Data Taking Us Closer to the Deeper Questions in Artificial Intelligence? Due to its clickbaity nature, I couldn’t resist, and landed on the page titled: “Is Big Data Taking Us Closer to the Deeper Questions in Artificial Intelligence? A Conversation With Gary Marcus”...
Read more...
-
May 2, 2016
An author kept coming up in the papers I read recently; and more importantly, I think he is one of the few (if not the only one) to have worked with both Hinton and Tenenbaum (two cognitive science / machine learning researchers I love. Yes, my love stories are weird). Salakhutdinov Publications...
Read more...
-
Apr 29, 2016
My journey into understanding reasoning is getting started. I was missing a good set of references about the study of reasoning. In this video: Development of Intelligence - Josh Tenenbaum: Bayesian Inference, Tenenbaum has a slide listing a bunch of authors for different subjects… For reasoning, we have: Chater, Oaksford, Sloman, McKenzie, Heit, Kemp and...
Read more...
-
Apr 28, 2016
I first heard about GPGP in this talk by Josh Tenenbaum: Two architectures for one-shot learning It is similar to the idea of analysis by synthesis / Helmholtz machines: Analysis by synthesis (vanilla) Analysis by synthesis with DNN GPGP is what happens when you take those ideas and throw probabilistic programming into the mix (as...
Read more...
-
Apr 22, 2016
Let’s say you were told by someone who is always right that anything you attempt is going to succeed. What would you try to do? In other words, what is the coolest thing you can think of? As a member of Homo sapiens, a mammal, a living thing, a conscious being or whatever,...
Read more...
-
Apr 20, 2016
The long-awaited ApolloStack (which will enable SQL support with Meteor) is finally coming together.
The high-level info can be found here: http://www.apollostack.com/
The documentation here: http://docs.apollostack.com/
They mention the existence of https://astexplorer.net/ which I didn’t
know about and is pretty cool.
Read more...
-
Apr 18, 2016
I wrote previously about my attempt to design, learn and use a better keyboard. Towards a better keyboard Screw it, I’m learning QGMLWY Layouts review: Dvorak vs Colemak vs Carpalx vs Workman Special characters layout All those ideas are getting incorporated into the keybest repository. The design is mostly complete; I still need to learn...
Read more...
-
Apr 17, 2016
Quick review of the main event of StarLeague season 1. The big story is that Dark is wrecking everyone. But there are a couple of other things to notice: Zerg is much more present than I expected it to be. Very few Terrans in the Ro16 (7 Protoss, 6 Zerg and only 3 Terrans)...
Read more...
-
Apr 16, 2016
Right in the middle of this AMA of Geoff Hinton, there is this question: Hi Professor Hinton, Since you joined Google lately, will your research there be proprietary? I’m just worried that the research done by one of the most important researchers in the field is being closed to a specific company. And Hinton replies:...
Read more...
-
Apr 15, 2016
Today I stumbled upon Holographic Reduced Representations (HRR). By chance, it ties nicely with the last AI-ish post and the part-hole paper. HRRs are introduced by Tony Plate in this paper: Holographic Reduced Representations as well as this book, also by Plate: Holographic Reduced Representation: Distributed Representation for Cognitive Structures HRRs are also used in...
Read more...
-
Apr 12, 2016
I didn’t know Bill Atkinson before this Triangulation episode. His anecdote about how Steve Jobs recruited him is fantastic (at about 5:00). But he has also kept up with the neuroscience. At about 10:50, he starts talking about the neocortex and HTM theory. It is quite refreshing to hear about HTM theory from someone else...
Read more...
-
Apr 12, 2016
LessWrong has a couple of great articles about wireheading. One interesting question, to me, is this: If, given the choice, you could enter a state of maximum pleasure for a very long time, would you do it? Would you decide to turn into a wirehead? Some details can affect the decision somewhat, such as: how...
Read more...
-
Apr 10, 2016
My current guess about the state of AI is that perception/inference is mostly nailed down, and a big remaining challenge is reasoning. I’ve explored the topic of artificial reasoning a bit in CM topologies part1 and part2. In this post, I’ll take a higher level view by reviewing an old Hinton paper: Mapping Part-Whole Hierarchies...
Read more...
-
Apr 9, 2016
For a long time, I did not get how contrastive divergence (CD) works. I was stumped by the bracket notation, and by “maximizing the log probability of the data”. This made everything clearer: http://www.robots.ox.ac.uk/~ojw/files/NotesOnCD.pdf. Local copy here (in case the website is down). The only math needed is integrals, partial derivatives, sums, products, and the derivative...
Read more...
-
Apr 8, 2016
I’ve been impressed with Zest’s play in the latest GSL.
Zest vs Cure:
game1
game2
Zest vs Soo:
game1
game2
Zest vs TaeJa:
game1
game2
Zest vs Journey:
game1
game2
game3
I love the safe play with gateway units, and the willingness to go lategame when necessary.
Read more...
-
Apr 7, 2016
The Deese–Roediger–McDermott (DRM) paradigm is fascinating. The procedure typically involves the oral presentation of a list of related words (e.g. bed, rest, awake, tired, dream, wake, snooze, blanket, doze, slumber, snore, nap, peace, yawn, drowsy) and then requires the subject to remember as many words from the list as possible. Typical results show that subjects...
Read more...
-
Apr 6, 2016
@ThePracticalDev has a lot of good book recommendations. I’ll comment on some of them. Now a crucial skill every good programmer should have. Pro-tip for advanced JIRA users: take the time to add “Pebkac” to the list of possible issue resolutions. This way, you will be able to close most of your issues as “Pebkac” and...
Read more...
-
Apr 4, 2016
50 days ago, I gave up my hard-won Dvorak typing speed in favor of QGMLWY. But I kept practicing (forcing myself to at least 1 minute of practice every day, and averaging about 25.6 minutes per day, so a total of 21.36 hours. More than the expected “5 to 10 hours”, but still). And the good thing is...
Read more...
-
Apr 3, 2016
Recently I’ve played some Factorio. The game being about automation, it fits nicely with the theme of the blog. Quick tangent: the Factorio community has produced some great layouts. I’m especially loving those bus patterns by myhf (1 2 3). factorioblueprints also has some great layouts, such as this All in one 5/6/12 science (up...
Read more...
-
Apr 3, 2016
From this interview with Stephen Wolfram: (In the video at 54:45) The great frontier 500 years ago was literacy. Today, it’s doing programming of some kind. Today’s programming will be obsolete in not very long. For example, when I was first using computers in the ’70s, people would say, “If you’re a serious programmer, you’ve...
Read more...
-
Apr 2, 2016
This paper is a great summary of some ideas about AI: Towards Machine Intelligence. Abstract. There exists a theory of a single general-purpose learning algorithm which could explain the principles of its operation. This theory assumes that the brain has some initial rough architecture, a small library of simple innate circuits which are prewired at...
Read more...
-
Apr 1, 2016
Some data visualisation tools:
Closed source:
Tableau: create visualisations with a drag-and-drop interface. Used by almost everybody.
Looker
Mode Analytics: SQL editor, UI to create charts, export and share.
Periscope: “Type SQL, get charts”. Redshift-friendly. Very good technical blog. (Eg: Redshift guide and periscope data cache)
Chartio
Qlikview
Gooddata
Free/Open source: Caravel...
Read more...
-
Mar 31, 2016
captionbot.ai works pretty well.
It probably is a CNN hooked up to an RNN (LSTM or GRU). The CNN transforms the image into a feature vector.
The feature vector is then fed into the RNN to produce language.
Read more...
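The wiring described above (image → CNN feature vector → RNN emitting words) can be sketched in a few lines of numpy. Everything here is a stand-in: the "CNN" is a trivial pooling function, the weights are random and untrained, and the vocabulary is invented, so any caption produced is meaningless; the point is only the shape of the pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the CNN: any function mapping an image to a feature vector.
def cnn_features(image):
    return image.mean(axis=(0, 1))  # toy 3-dim "feature vector" (one per channel)

# Toy vanilla RNN decoder: the image features seed the hidden state,
# and each step emits logits over a tiny made-up vocabulary.
vocab = ["<eos>", "a", "cat", "dog"]  # index 0 doubles as start/stop token
H, V = 3, len(vocab)
W_hh = rng.normal(size=(H, H)) * 0.1
W_xh = rng.normal(size=(V, H)) * 0.1
W_hy = rng.normal(size=(H, V)) * 0.1

def caption(image, max_len=5):
    h = cnn_features(image)           # CNN output initializes the RNN state
    x = np.zeros(V); x[0] = 1.0       # one-hot start token
    words = []
    for _ in range(max_len):
        h = np.tanh(h @ W_hh + x @ W_xh)   # recurrent update
        idx = int(np.argmax(h @ W_hy))     # greedy decoding
        if idx == 0:                       # <eos>: stop
            break
        words.append(vocab[idx])
        x = np.zeros(V); x[idx] = 1.0      # feed the word back in
    return " ".join(words)

print(caption(rng.random((8, 8, 3))))
```

In a real captioner the CNN and RNN would be trained jointly on image-caption pairs; this only shows how the feature vector flows into the recurrent decoder.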
-
Mar 30, 2016
Let’s take those three things and analyse them one by one. Then together. And see what we find. Monads (better understood in pictures): they are values wrapped inside contexts. And we can define operations and say what should happen given the context, eg: instance Functor Maybe where fmap func (Just val) = Just (func val)...
Read more...
-
Mar 28, 2016
Geoffrey Hinton explains CD (Contrastive Divergence) and RBMs (Restricted Boltzmann Machines) in this paper with a bit of historical context: Where do features come from?. He also relates it to backpropagation and other kinds of networks (directed/undirected graphical models, deep belief nets, stacking RBMs). This has stayed open in my chrome tabs for a few...
Read more...
-
Mar 26, 2016
Boredom is a useful terminal value of ours. I am easily bored. The fix so far has been to research topics which I am interested in more deeply. But then the inferential gap grows wide. This has been both a curse and a blessing. Side note: Viewing boredom as a “terminal preference for a stream...
Read more...
-
Mar 24, 2016
If you are a bit bored or you don’t know what to read next, go there: http://www.cs.toronto.edu/~hinton/papers.html Or there http://people.idsia.ch/~juergen/onlinepub.html Going there and randomly clicking has been a great strategy over the past months to discover ideas I didn’t even suspect. More recently, going there: https://deepmind.com/publications.html has proven enlightening as well. I wish those papers...
Read more...
-
Mar 23, 2016
An interesting hypothesis of neocortical function is explained here: Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects We describe a model of visual processing in which feedback connections from a higher- to a lower-order visual cortical area carry predictions of lower-level neural activities, whereas the feedforward connections carry the...
Read more...
-
Mar 20, 2016
This is part 2 of the CM Topologies series. Part 1 is here So you’ve read the CM paper and are ready for some RNN wiring. Great! First off, a quick recap of the algorithm for training C and M: Algorithm 1: Train C and M in Alternating Fashion. Initialize C and M and their weights....
Read more...
-
Mar 19, 2016
I want to start a series of posts discussing CM systems (controller-model systems). CM systems are the subject of this paper: On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models. The discussion really takes off in section 5: (C is the controller and M the...
Read more...
-
Mar 18, 2016
I want to go over the different typing tools for learning to type faster (hopefully to learn another layout). Typing Cat: great to discover and learn new layouts. Simple UI, slightly annoying sometimes. Ztype: Space Invaders-style game, type to shoot enemy ships. The game aspect helps to stay focused. The score charts are nice. Keybr...
Read more...
-
Mar 17, 2016
Alternative keyboard layouts are relatively easy to find. But finding a special characters layout is much harder. A special characters layout is the arrangement of special characters such as =, +, ), #… The idea is to map them on top of the normal layout, and access them with a custom modifier key. If there...
Read more...
-
Mar 15, 2016
The basic idea is being able to decouple the structure of the data in the UI from the structure of the data in the database. The solution is a query language used in the UI (graphql or a datomic-datascript-like syntax). Each UI component can declare its data requirements in this language, and then we can...
Read more...
-
Mar 14, 2016
Eliezer Yudkowsky made a nice commentary about AlphaGo after the 2nd game was played (also posted in this HN thread). I’ll go over some of those points because I think they are accurate and sometimes provocative. Edge instantiation. https://arbital.com/p/edge_instantiation/ Extremely optimized strategies often look to us like ‘weird’ edges of the possibility space, and...
Read more...
-
Mar 13, 2016
Or: Do yourself a favour: bust your cached values and rebuild them from source. On this thread: Why do we work so hard?, nostrademons comments: (emphasis mine) Nah, it’s that folks who get their values from other folks are the only ones who bother to tell other folks how well they’re doing at achieving their...
Read more...
-
Mar 12, 2016
(I am referring to JavaScript-the-landscape, not JavaScript-the-language.) JavaScript is moving very fast. Every week, there is a new slew of libraries. How well do I fare on JavaScript awareness? Surely I am doing OK. If you think you are too, read this little excerpt from State of the Art JavaScript in 2016: The first...
Read more...
-
Mar 11, 2016
I want to go over two recent interviews of Jürgen Schmidhuber (Director of IDSIA) and Demis Hassabis (founder of DeepMind). Will AI Surpass Human Intelligence? Interview with Prof. Jürgen Schmidhuber on Deep Learning. Highlights: InfoQ: Can machines learn like a human? Schmidhuber: Not yet, but perhaps soon. See also this report on “learning to...
Read more...
-
Mar 10, 2016
In a recent interview, Eliezer Yudkowsky was asked How does your vision of the Singularity differ from that of Ray Kurzweil? His response, with some comments: I don’t think you can time AI with Moore’s Law. AI is a software problem. I don’t think that humans and machines “merging” is a likely source for the...
Read more...
-
Mar 9, 2016
This morning, DeepMind made history by beating Go world champion Lee Sedol. (4 more matches will be played in the coming days). On this HN thread, clickok made some great comments: AlphaGo underwent a substantial amount of improvement since October, apparently. The idea that it could go from mid-level professional to world class in a...
Read more...
-
Mar 7, 2016
In this post, I’ll give my view on some of the most popular optimized keyboard layouts. Note: when I talk about Carpalx, I am talking specifically about the QGMLWY layout, one of its fully optimized layouts. Quick disclosure: I have experience with Dvorak and Carpalx, and none whatsoever with Colemak and Workman. I’ve included...
Read more...
-
Mar 6, 2016
My little quest to find a theory of reasoning is having some hiccups. I’ve sketched some ideas previously, but it’s all hazy and unclear. So I’ll take a little detour into the field of neuroscience. Luckily, Coursera and edX have me covered with two courses which I highly recommend: Computational Neuroscience, University of Washington, on...
Read more...
-
Mar 5, 2016
Until recently, I wasn’t aware that you could juggle “4 low”: that is, juggling 4 diabolos without constantly throwing them into the air. 4 is so hard to manage on the string that you don’t have much room to do any tricks. But still, you can. Here is all the footage I could find about...
Read more...
-
Mar 4, 2016
This post is a followup on the post about Champernowne indexing. The basic idea here is that a lot of regularities can be found in the world. (You could call it structure, repetition, duplication…). Let’s say we have the general goal of representing this repeated data. We could represent it as it is: by explicitly...
Read more...
-
Mar 3, 2016
As an actual owner and user of both of these bikes, I will give my impressions about each. Quick history: I first had the Strida EVO for about one year, and got it stolen. I then bought an IF Mode. It’s been a year and I still use it. I have used both bikes fairly...
Read more...
-
Mar 2, 2016
In the last episode of Transmission, Ben Strahan asks Geoff Schmidt to provide some “fun insights” and to “dream for a bit”. Geoff’s reply: Let’s open Geoff’s bad ideas file (2:50 in the video). What follows is his vision of what app development could become. I won’t bother trying to summarize it; I’ll just let...
Read more...
-
Feb 29, 2016
Let’s consider the general problem of representing data in a compact way. A naive strategy would be to represent an arbitrary sequence of bits as itself. And if there are some regularities, maybe we could find the repetition and compress it: by representing the compressed bits as themselves, and a way to specify how the...
Read more...
-
Feb 28, 2016
Let’s consider the “mission” of some of the leading AI organisations: DeepMind, Numenta, nnaisense. Vicarious hasn’t publicly said what its mission is, but it’s probably very close. Related to this is the goal of Jürgen Schmidhuber: Since age 15 or so, the main goal of professor Jürgen Schmidhuber has been to build a self-improving...
Read more...
-
Feb 27, 2016
A little while ago, Sacha Greif wrote about what went wrong with Meteor. And on top of that, MDG still hasn’t solved two long-standing problems: better support for NPM packages, and support for other data sources (in particular SQL). 1.3 is about to land soon; it fixes the NPM packages support, and adds...
Read more...
-
Feb 26, 2016
For any good programmer constantly improving their toolbelt, exploring and assessing new programming languages is very natural. Assuming programming languages vary in power - and they do - some have to be at the top. And what’s at the top? Lisp. Then why aren’t we all programming in Lisp? Something is wrong with...
Read more...
-
Feb 25, 2016
Spoiler: Not much. I’ll quote Sam Harris in this little talk: All you have to grant to get your fears [about AI] up and running is that: We will continue to make progress in hardware and software design (unless we destroy ourselves some other way), and There’s nothing magical about the wetware we have running...
Read more...
-
Feb 24, 2016
Warning: The ideas discussed here are half-baked, and I do not understand them fully. In this post, I want to elaborate on the view that the brain builds a model of the world. Not a sensory-motor model, even though senses and motor commands are definitely part of the model; but a model of the structure...
Read more...
-
Feb 23, 2016
Some of the AI people that most influenced my thinking: Douglas Hofstadter - GEB Geoffrey E. Hinton - Deep learning, Google, University of Toronto Jürgen Schmidhuber - Swiss AI Lab Jeff Hawkins - Numenta Nick Bostrom - Superintelligence Eliezer Yudkowsky - LessWrong Steven Pinker - How the Mind Works Josh Tenenbaum - lectures...
Read more...
-
Feb 21, 2016
In a similar fashion to the post AGI: what is missing?, I want to give a brief overview of some recent ideas about AI/intelligence. The selection criteria for each idea are that it seems good on a subjective level, and that it crossed my mind at the moment I made the list. #1 Driven...
Read more...
-
Feb 20, 2016
One week ago, I made the choice to learn the QGMLWY layout. I knew this was going to be a difficult choice. My progress has been promising: After the first day (~2hrs of training), I knew the home row. Now, after a week of training with an average of about 20min of practice each day,...
Read more...
-
Feb 19, 2016
We humans are capable of elaborate, high-level analogies. Douglas Hofstadter gives a good example in this video. In this post, I want to propose a simple way a machine could do the same. First, we need to remember that an analogy is just a partial equality of structure between two mental representations. For example, if...
Read more...
-
Feb 18, 2016
First of all, I invite you to read this weird-ass poem by Dylan Thomas (1914 - 1953). You probably first saw it in the movie Interstellar. Do not go gentle into that good night, Old age should burn and rave at close of day; Rage, rage against the dying of the light. Though wise men...
Read more...
-
Feb 17, 2016
“Make the Back-End Team Jealous: Elm in Production” by Richard Feldman. Favorite part (8:28): My tests thoroughly cover the design… that no longer exists. Kind of a problem. And the consequence is time. “Eve” by Chris Granger: lots of ideas and experimentation. Heavily inspired by Datomic. Love the demo at 23:34. “Kolmogorov music” by Christopher...
Read more...
-
Feb 15, 2016
From this talk by Derek Slager: ClojureScript for Skeptics, there is a nice section about the “Shaky Foundation” of JavaScript: If you ever want to be entertained, if you’re bored, go to Stack Overflow, click the javascript tag and find the most popular stuff. And you will find example after example of this. And then he...
Read more...
-
Feb 14, 2016
In a previous post, I reviewed some alternative keyboard layouts. Up to this point, I was in the process of switching to Dvorak. I had trained myself quite a bit, probably somewhere around ~10 to 20 hours, with a typing speed of 61 WPM. Discovering new, more efficient keyboard layouts posed the following dilemma: sticking...
Read more...
-
Feb 13, 2016
List of AI books, with an approximate rating out of 10. Books I’ve read: The Quest for Consciousness, Christof Koch (7) On Intelligence, Jeff Hawkins (8) GEB, Douglas Hofstadter (9) Fluid Concepts and Creative Analogies, Douglas Hofstadter (9) Superintelligence, Nick Bostrom (9.5) How the Mind Works, Steven Pinker (9) How to Create a Mind, Ray...
Read more...
-
Feb 12, 2016
Distractions are among the factors that hurt productivity the most. When I talk about dealing with distractions, I’m talking in the sense of dealing with procrastination and akrasia. One technique that definitely didn’t work for me is thinking of procrastination as not having control over my actions, as if a monkey was...
Read more...
-
Feb 11, 2016
One very useful self-improvement technique is self-deception: being able to lie to yourself. What if you could just tell yourself, “let’s just do this hard work, it’s only going to take a couple of minutes,” when in reality it takes a few hours? This way you can actually get it started, which is the key...
Read more...
-
Feb 10, 2016
In a previous post, I enumerated a few approaches/theories about how intelligence might work. In this post, I’ll explore my current “best” guess. Keep in mind that my “best” guess changed over time. I would consider some of my previous best guesses wrong, so by extrapolation, my current best guess is also likely to be...
Read more...
-
Feb 8, 2016
Qwerty and qwerty-like keyboard layouts suck. So the first thing to do if we want a better keyboard is to let go of this atrocious layout. What to choose, then? This website has rated a lot of alternatives, and also has fully optimized QMLW layouts. Quick confession: I discovered the QGMLWY layout today, having spent...
Read more...
-
Feb 7, 2016
In this post, I want to explore the idea of querying a generative black box as a way to get infinite labelled training data. A generative black box is any program P that can generate a data vector x given a description vector (or code vector) y. It could be for example a 3d graphics...
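A minimal Python sketch of the idea. The black box here is a made-up noisy linear "renderer" standing in for something richer like a 3d engine; the names and the map itself are illustrative, not from the post:

```python
import random

def black_box(y):
    # Toy generative program P: maps a code vector y to a data vector x.
    # A real P could be a 3d renderer; this stand-in is a noisy linear map.
    return [2.0 * yi + random.gauss(0, 0.1) for yi in y]

def labelled_stream(dim=3):
    # Infinite stream of labelled training pairs: sample a random code y,
    # query P for the data x it generates, and use y itself as the label.
    while True:
        y = [random.uniform(-1, 1) for _ in range(dim)]
        yield black_box(y), y

stream = labelled_stream()
x, y = next(stream)  # one (data, label) pair; ask for as many as you want
```

A learner trained on such a stream is effectively learning to invert P, with as much labelled data as it cares to request.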
Read more...
-
Feb 6, 2016
Google Research Blog article Article on deepmind website (with videos) paper (published in Nature on 28 January 2016) Go ratings website Game 5 commentary Estimated Elo rating of AlphaGo (as of the writing of the paper, Jan 2016): 3140. The Elo was based on a tournament between different Go programs. (See Extended Data Table 6...
Read more...
-
Feb 5, 2016
Every time I set out to write a blog post, I hold myself to a “high standard”. Even having decided to churn out piles of clay, the tendency to want everything to be perfect comes back: the post must be lengthy, every idea must be carefully thought out. Well, no. Here’s what will happen instead:...
Read more...
-
Feb 4, 2016
In this information age, we consume a lot of content. But we remember only a fraction of it, and a smaller fraction still changes our minds. Here is a brief list of some of the stuff that changed my mind the most. How to do what you love Paul Graham articulates very nicely...
Read more...
-
Feb 3, 2016
Meteor started out with 7 principles (strangely, it no longer shows up in the docs, but anyway). Let’s go through the list: 1) Data on the Wire The idea is that unlike in ajax, pjax, or turbolinks, your server will not return HTML that you display directly. In the case of Meteor, the data...
Read more...
-
Feb 2, 2016
The Eliezer scale, found in this post, is the following: We’re going to build the next Facebook! We’re going to found the next Apple! Our product will create sweeping political change! This will produce a major economic revolution in at least one country! (Seasteading would be change on this level if it worked; creating a...
Read more...
-
Feb 1, 2016
We have been hearing great promises from AI proponents for a while now. Back in 1965, I.J. Good wrote: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could...
Read more...
-
Jan 31, 2016
In this post, I will briefly introduce the game agar.io in Game mechanics parts 1 and 2, and then reveal some pro tips in the strategy section. Examples will be illustrated with short clips wherever possible. The main goal is to show how rich strategies and tactics can emerge from seemingly simple game mechanics. The agar.io game...
Read more...
-
Jan 30, 2016
Jürgen Schmidhuber has written a fair amount of cool stuff. He has a tendency to use many acronyms in his papers. This post is an attempt to summarize some of them. Papers: On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models Deep Learning in Neural...
Read more...
-
Jan 29, 2016
To get started, I’ll challenge myself with the following: One post every day. OK to skip one post per week (so >= 6 posts/week) Every post is at least 50 words Subject doesn’t matter Quality doesn’t matter Until 2017-01-01 That’s a commitment of at least 288 posts and a minimum of 14400 words. (Looks manageable!)...
Read more...