Shell sort is much faster than Bubble sort on tiny microcontrollers, for only a little extra flash, something like 40-100 bytes. If that's too much, Insertion sort is about 6x faster than Bubble sort for only 10-20 bytes of extra flash.
I'll say this about IBM: because it's so old, it was the most diverse company I ever worked for, including age, nationality, race, sex, and any other category you can think of. Basically you had all types of people in all stages of life, not just young white workaholic tech-bros. The founders are long gone, so everyone there (including the CEO) is a professional, meaning nobody has any kind of personal attachment to the company. We were all in the same boat, as it were. When your older coworker suddenly disappears due to a stroke, it puts things in perspective.
The fast-paced startup is really the hack: it combines the energy of youth with the egomania of its founders. Ask yourself, is it healthy?
Anyway, IBM's customers tend to be other Fortune 100s and governments, basically other similar organizations, and my experience was that we took care of them pretty well. The products were not pretty (there was no Steve Jobs-like person to enforce beauty) and were rather complex due to all the enterprise requirements. But they were quite high quality, particularly the hardware.
The awe induced when standing in front of a brand new, kitted out x95 frame with all its drawers full and that special shade of IBM blue on everything is definitely something. Pull out the HMC and just think about how many decades of R&D and experience and tears went into the entire system.
Hint: by all means possible, make sure you are not the owner of (or the manager of the person who owns) any assets beyond your personal laptop. If, for example, you end up owning all the development and test servers of the original company, it becomes your responsibility to ensure that each OS (of each LPAR of each VM) is security compliant, runs the endpoint asset manager, has up-to-date OS patches, and has its DASD encrypted, and you must periodically show physical proof that the asset still exists and indicate where it's located: photos of asset tags or whatever. It will also be your responsibility to dispose of the asset (with all the associated paperwork) at the end of its life.
It helps if such machines are not actually on the 9. network, or are behind an internal firewall (then they don't care about the security compliance as much).
Probably, but now it's going to be formalized and will entail a lot of paperwork (manual entry into many very badly written Java-based CRUD applications). Sure, these things are all good ideas, but trust me, they have all been overthought. Do you want this to be your job?
I still "own" (i.e. I'm the sole user with root access and can install the OS of my choosing) an old machine from the days before everything moved to the cloud, and I guess no one from IT has gotten around to decommissioning it yet. I have no idea where it is located (beyond knowing which office it is assigned to), I have never seen it, and there is no way in hell I am going to attach any tags, waste my time installing enterprise spyware on it, or manually encrypt its data. Do engineers do that for development servers at your job? If yes, name and shame!
You can measure it by how many management steps you, as an employee of the recently acquired company, are from the CEO in the hierarchy. As time goes on, this number tends to increase. It used to be easy to see this in Lotus Sametime or something else that had some form of employee directory.
That's awesome. Before ~2007 they allowed you to use open-source Pidgin to connect to the Domino servers. A friend of mine and I used it to make a bot: if you sametimed me, you got Zork.
It reminds me of another IBM IT rule: they wanted your chat history (and email) older than two years to all be deleted, for legal liability reasons. It was important to save your Sametime chat history (an XML file) and export your email periodically if you wanted to keep this stuff.
This was actually better than Slack in one way: you could grep the files for things and not have to rely on search within the tool.
I built many such oscillators the same way. But I used a BC107 NPN, connected the collector to ground and the emitter to the resistor/capacitor net, with the base left floating. Add an LED in series with a transistor connected this way, and you have an LED blinker.
I always assumed ultra-processed means that the food is loaded with preservatives like phosphates or BHT. I guess that's part of it, but maybe the efficiency of digestion should be considered too. I remember Ben Krasnow (Applied Science) measured the calories in poop; humans are not very efficient at extracting all the calories, which very likely means large variance in efficiency between foods. But extending this further, the calories lost during preparation should also be accounted for...
So how about: calories * digestion-efficiency - the calories you personally need to expend to prepare or acquire it. The higher this number, the more processed the food. So cane sugar is very bad, unless you personally harvested it.
Bad news for highly paid programmers: basically all their food should be considered ultra-processed, since no physical labor was needed to acquire it.
A better example is astronauts. Their diet (on the job) is 100% ultra-processed food. They perform highly and have limited access to normal physical activity. But they're hard to study, because radiation and gravity differ so much from conditions on Earth that the categorization of food might not be an influential factor at all.
Ironically, Bill Gates was big into UNIX (see his Xenix interview), and had they not gotten lucky with the whole MS-DOS deal, maybe they would have kept Xenix, and who knows how that would have turned out.
Xenix was also my introduction to UNIX.
However, due to our school's limited resources, there was a single PC tower running it; we had to prepare our examples in MS-DOS using Turbo C 2.0 and API mocks, and take 15-minute turns at the Xenix PC.
> had they not gotten lucky with the whole MS-DOS deal, maybe they would have kept Xenix and who knows how that would have turned out.
Oh, absolutely, yes. It's one of the historical inflection points that's visible.
My favourites...
• MS wanted to go with Xenix but DOS proved a hit so it changed course.
• DR had multitasking Concurrent DOS on the 80286 in 1985, but Intel's final released chip removed the feature CDOS needed, so it pivoted to FlexOS and RTOSes, leaving the way open to MS and OS/2 and Windows.
• MS wanted OS/2 1.x to be 386-specific, but IBM said no. As a result, OS/2 1.x was crippled by being a 286 OS, it flopped, and IBM lost the x86 market.
• Quarterdeck nearly had DESQview/X out before Windows 3: a TCP/IP-enabled, X11-based multitasking DOS extender that bridged DOS to Unix and open systems... but it was delayed, and so when it appeared it was too late.
• GNU discussed and evaluated adopting the BSD kernel for the GNU OS, but decided to go with Mach. Had it gone for the BSD kernel, there would have been a complete working FOSS Unix for the 386 at the end of the 1980s, Linux would never have happened, and Windows 3 might not have been such a hit that it led to NT.
I got a whole series of articles out of this, titled in honour of Douglas Adams's fake trilogy about God...
Never saw one of those. Tandy computers did exist in the UK, and even here on the Isle of Man there was a single Tandy store. (They weren't called "Radio Shack" here.) But while they sold lots of spares and components and toys, they didn't sell many computers.
> I had kind of the reverse feeling: when the 486 came out, I knew those expensive SPARC and MIPS workstations were all doomed.
Well, yes. Flipside of the same coin.
Expensive RISC computers were doomed. Arm computers weren't expensive back then: they were considerably cheaper than PCs of the same spec. So for a while, they thrived, then when they couldn't compete on performance they moved into markets where they could compete on power consumption... which they then ruled for 30 years.
This worked in DOS, but was easily ported to Linux.
As for DPMI: I used the CWSDPMI client fairly recently, because it allows a 32-bit program to work in both DOS and Windows (it auto-disables its own DPMI functions when it detects Windows).
The lack of (easy) recursion in CPP is so frustrating, because it was always available in assembly language, even with very old and very simple macro assemblers - with the caveats that the recursion depth was often very limited and there was no tail call elimination. For example, if you need to fill memory:
; Fill memory with backward sequence
macro fill n
word n
if n != 0
fill n - 1
endif
endm
So "fill 3" expands to:
word 3
word 2
word 1
word 0
There is no way this was not known about when C was created. They must have been burned by recursive macro abuse and banned it (perhaps from m4 experience, as others have said).
The other assembly language feature that I miss is the ability to switch sections. This is useful for building tables in a distributed fashion. Luckily, you can do it with gcc.