>It was academia.edu, and who can possibly explain how they were able to get a domain name in the .edu TLD?
Relevant section from Wikipedia:
>Academia.edu is not a university or institution for higher learning and so under current standards it would not qualify for the ".edu" top-level domain. However, since the domain name "Academia.edu" was registered in 1999, before the regulations required .edu domain names to be held solely by accredited post-secondary institutions in the US, it is allowed to remain active and operational. All .edu domain names registered before 2001 were grandfathered in, even if not an accredited USA post-secondary institution.
There are some 10minutemail / trashmail providers out there who offer .edu emails - great for getting student-only benefits for free, but it sucks for everybody implementing those benefits on their platform, because they can't just check whether the domain ends in .edu but instead need to validate against a common list of valid universities...
> rather need to validate against a common list of valid universities
Don't you need that already anyway? There's no standard for how universities format their academic email addresses.
Plus, .edu only applies to American universities. Services validating if you're a "real" student by checking for .edu emails were quite annoying during my time as a student. A lot of these platforms don't even seem to know that .edu is an American thing.
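To make the "common list of valid universities" point concrete, here's a minimal sketch of that check in Python. The domain list and the is_student_email helper are invented for illustration; a real implementation would load a vetted dataset of institutional domains rather than hard-coding a few:

    UNIVERSITY_DOMAINS = {
        "ox.ac.uk",   # UK universities don't use .edu at all
        "ethz.ch",    # neither do Swiss ones
        "mit.edu",
    }

    def is_student_email(address: str) -> bool:
        domain = address.rsplit("@", 1)[-1].lower()
        # Accept subdomains too, e.g. cs.ox.ac.uk matches ox.ac.uk.
        parts = domain.split(".")
        suffixes = {".".join(parts[i:]) for i in range(len(parts))}
        return bool(suffixes & UNIVERSITY_DOMAINS)

    assert is_student_email("alice@cs.ox.ac.uk")
    assert not is_student_email("bob@academia.edu")  # ends in .edu, still no

Note that academia.edu fails the check even though it ends in .edu, which is exactly why the suffix test alone breaks in both directions.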
Despite originally saying it was a perk of graduation, mine ended up cutting access after 10 years, citing cost savings (I imagine Google Workspace bills add up quickly compared to self-hosting email). I wouldn't be surprised if this is the trend now.
I distinctly remember that, freshman year, we were told in some big auditorium during onboarding that we should start using our university-provided emails as our primary "professional" emails.
Someone then asked what happens when we graduate and lose access to those emails, and they didn't have a particularly good answer.
I think that was also the same onboarding where they passed around a piece of paper for us to sign in with our name and social security number.
I still get the occasional email to mine inviting me to student events. Not sure they realize I graduated, but it was long enough ago that some of the buildings the events are held in didn't exist when I attended.
I believe there's a vicious circle: a few companies start using AI with an actual idea, then shareholders of other companies say "we need to use AI as well, it works for them!", and then more companies start using AI to "not fall behind", etc. All with very few actual use cases. 99% are doing it just because others are.
The owner lives in London and rarely visits, but he has arranged for AI consultants to come in and workshop with us to see how "AI can help the business". Our operations mainly consist of data entry.
Isn't data entry a really good use case for LLM technology? It depends on the exact use case, of course. But most "data entry" jobs are data transformation jobs, and those get automated using ML techniques all the time. Current LLMs are really good at data transformation too.
If your core feature is data entry, you probably want to get as close to 100% accuracy as possible.
"AI" (LLM-based automation) is only useful if you don't really care about the accuracy of the output. It usually gets most of the data transformations mostly right, enough for people to blindly copy/paste its output, but sometimes it goes off the rails. But hey, when it does, at least it'll apologise for its own failings.
> All with very few actual use cases. 99% are doing it just because others are.
Same here, but I started a few months earlier than most (I work in a marketing department as the only one with SWE skills). There's a lot you can do with AI.
For one, you can finally introduce some more automation; people are more open to it now. And whenever you need more "human-like intelligence" in your automation, you basically make an LLM call.
It also helps with creating small microsites, etc.
It helps me personally because whenever I want to make a small command-line tool to make my life easier, I can now also decide to build a whole website, as that's about as quick nowadays with things such as Codex and Claude Code (roughly 30 min.).
Aha, no, Transmeta was a totally different thing, from the early 2000s. The idea there was that they would have a special "Very Long Instruction Word" processor, kind of the opposite of RISC, where a lot of things would be embedded into a single 128-bit opcode. Think of it as one hell of a wide horizontal-microcode architecture, if RISC is kind of a vertical-microcode architecture.
It was pretty clever. You loaded x86 code (or Java bytecode, or Python bytecode, or whatever you felt like) and it would build up a table of emulation instructions, translating on the fly so the x86 code ran on the Crusoe's ludicrous SUV of an instruction set. They were physically smaller and far less power-hungry than an equivalent x86 chip, even though they were clocked roughly 30% faster.
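A toy sketch of that translate-and-cache loop, in Python for illustration - this is the general dynamic binary translation idea, not Transmeta's actual Code Morphing Software:

    translation_cache: dict[int, list[str]] = {}

    def translate_block(guest_bytes: bytes) -> list[str]:
        # Stand-in for a real x86 -> VLIW translator; here we just
        # emit one fake native op per guest byte.
        return [f"native_op_{b:02x}" for b in guest_bytes]

    def run_block(pc: int, guest_memory: dict[int, bytes]) -> list[str]:
        if pc not in translation_cache:        # cold: translate once
            translation_cache[pc] = translate_block(guest_memory[pc])
        return translation_cache[pc]           # hot: reuse the translation

Hot loops get translated once and then reused on every pass, which is where the power and performance win came from.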
25 years ago they were going to be the future of computing, and people stayed away in droves. Bummer.
No no no, though, the transputer was a totally different thing. That was from 40-odd years ago, and - like the ARM chips we now use in everything - was developed in the UK by a company that did pretty okay for a while and then succumbed to poor management.
They were kind of like RISC processors. Much has been made of "you programmed them directly in microcode!" but you could say the same of any wholly combinatorial CPU, like the Good Ol' 6502, where the byte that's read on an instruction fetch directly gates things off and on.
The key was they had very very fast (like 10Mbps) serial links that would connect them in a grid to other transputers on a board. Want to run more simultaneous tasks? Fire in more chips!
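The programming model that came with this (the transputer's native language, occam, was built around CSP-style channels) can be roughly mimicked with threads and bounded queues standing in for chips and serial links - purely an analogy, not transputer code:

    import threading, queue

    def worker(name, inbox, outbox):
        # One "transputer": a sequential process that only talks to
        # its neighbours over point-to-point links.
        while True:
            value = inbox.get()          # blocking receive, like occam's '?'
            if value is None:            # poison pill: forward it and stop
                if outbox is not None:
                    outbox.put(None)
                return
            result = value * value       # stand-in for real work
            if outbox is not None:
                outbox.put(result)       # blocking send, like occam's '!'
            else:
                print(f"{name}: {result}")

    feed = queue.Queue(maxsize=1)        # bounded queues ~ serial links
    link = queue.Queue(maxsize=1)
    t0 = threading.Thread(target=worker, args=("T0", feed, link))
    t1 = threading.Thread(target=worker, args=("T1", link, None))
    t0.start(); t1.start()
    for v in [2, 3, 4]:
        feed.put(v)
    feed.put(None)                       # shut the pipeline down
    t0.join(); t1.join()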
You could get whole machines based on transputers, or you could get an ISA card that plugged into a 16-bit slot in your PC and carried maybe eight modules about the size of a Raspberry Pi Zero (and nowhere near as powerful). I remember in the late 80s being blown away by one of these in some fairly chunky 386SX-16 doing 640x480x256 colour Mandelbrot sets in like a *second*.
Again, they were going to revolutionise computing, this is the way the world was going, and by the time of the release of unrelated Belgian techno anthem Pump Up The Jam, transputers were yet another footnote in computing history.
Wow, the Mandelbrot set example really put things into perspective.
Unoptimized code would easily take tens of minutes to render the Mandelbrot set at 640x480x256 on a 486. FractInt (the Stone Soup Group's collaborative effort) was fast, but would still take tens of seconds, if not longer -- my memory is a little hazy on this count.
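For anyone who hasn't written one: the escape-time loop being benchmarked is tiny, it's just executed tens of millions of times. A naive sketch in Python (FractInt's speed came from integer arithmetic and periodicity tricks; this is the plain version):

    WIDTH, HEIGHT, MAX_ITER = 640, 480, 256

    def mandelbrot():
        image = []
        for row in range(HEIGHT):
            y = -1.2 + 2.4 * row / HEIGHT
            line = []
            for col in range(WIDTH):
                c = complex(-2.2 + 3.2 * col / WIDTH, y)
                z, n = 0j, 0
                while abs(z) <= 2.0 and n < MAX_ITER:
                    z = z * z + c   # the whole fractal is this one line
                    n += 1
                line.append(n)      # iteration count -> palette index
            image.append(line)
        return image

At 640x480 with up to 256 iterations per pixel, the worst case is ~78 million complex multiply-adds, which is why unoptimised code crawled on late-80s hardware.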
Around that time I worked in a shop that had an Amstrad PC2386 as one of our demo machines - the flagship of what was really quite a budget computer range, with a 386DX-20 and a whopping 8MB of RAM (ordered with an upgrade from the base-spec 4MB, but we didn't spring for the full 16MB because that would just be ridiculous).
Fractint ran blindingly fast on that compared to pretty much everything else we had at the time, and again it could show it on a 640x480x256 colour screen. We kept it round the back and only showed it to our most serious customers, and our Fractint-loving mates who came round after hours to play with it.
It follows the spam economy. If you can use AI to generate thousands of "articles", some unlucky Google user is bound to click on your link. When the price of an article is near zero, it is still profitable.
Academia.edu might be the most useless and spammiest service out there. They don't seem to offer anything of value, but you can't know that before you pay.
You might just as well say “The bifurcation of neural spines in sauropods can be likened to Marcel Proust’s seven-volume masterwork À la Recherche du Temps Perdu.”
It would be exactly as meaningful.
> Services validating if you're a "real" student by checking for .edu emails were quite annoying during my time as a student.
Considering that many universities provide email addresses to alumni, I don't think that heuristic would work either.
I wonder what the benefit is.
> Someone then asked what happens when we graduate and lose access to those emails
I'm not sure how common it is, but my wife has an edu email address despite being well over twenty years from graduation.
1. SOA and later microservices 2. Big data & MongoDB 3. Kubernetes 4. Blockchain
> Our operations mainly consist of data entry.
"It says 'no shellfish', go ahead - eat it"
Even with lots of context, the various services we tried would get something wrong.
E.g. "huile" is French for "oil", and sometimes it would get translated as "motor oil".
"AI" (LLM-based automation) is only useful if you don't really care about the accuracy of the output. It usually gets most of the data transformations mostly right, enough for people to blindly copy/paste its output, but sometimes it goes off the rails. But hey, when it does, at least it'll apologise for its own failings.
Same here, but I started a few months earlier than most (I work in a marketing department as the only one with SWE skills). There's a lot you can do with AI.
For one, you can finally introduce some more automation, they are more open to it. And whenever you need a more "human-like intelligence" in your automation, you basically make an LLM call.
It also helps in terms of creating small microsites, etc.
It helps me personally because whenever I want to make a small command-line tool to make my life easier, I can now also decide to create a whole website as it's about as quick to make nowadays with things such as Codex and Claude Code (aka 30 min.).
https://www.theregister.com/2003/06/17/linus_torvalds_leaves...
> Fractint ran blindingly fast on that compared to pretty much everything else we had at the time
It still took all night to render a Lyapunov set.
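That sounds about right given how Markus-Lyapunov fractals are computed: every pixel needs a warm-up plus a logarithm on each iteration of the logistic map. A rough sketch, with illustrative iteration counts:

    import math

    SEQUENCE = "AB"              # forcing sequence: r alternates between a and b
    WARMUP, STEPS = 100, 400     # illustrative; real renders used far more

    def lyapunov_exponent(a: float, b: float) -> float:
        x, total = 0.5, 0.0
        for n in range(WARMUP + STEPS):
            r = a if SEQUENCE[n % len(SEQUENCE)] == "A" else b
            if n >= WARMUP:
                # derivative of the logistic map at x is r*(1 - 2x)
                total += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-12)
            x = r * x * (1.0 - x)            # logistic map step
        return total / STEPS                 # < 0: stable, > 0: chaotic

Compared to the Mandelbrot loop, that's a log() per step, on hardware where floating point was already the bottleneck - overnight renders are believable.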
* https://www.youtube.com/watch?v=wLAFy7o7Zvo
* https://www.youtube.com/watch?v=ZxoODPQ4CTM
* https://www.youtube.com/watch?v=ENnAa7rqtBM
* https://www.youtube.com/watch?v=0heT2_OX8bY
* https://www.youtube.com/watch?v=oGxDVXGRQpY