/me whips out a collapsible podium
rotflmao I have no idea what you guys are talking about. I have no experience here.
In ye olden days, letters and such were encoded using as few bits as possible. Eventually the English world settled on ASCII, pretty much, in which basically everything could use 7 bits. Stuff got added, and ANSI was born, using all 8 bits in a byte.
Well, they didn't just have printed characters, but also control characters, decorations, and the like. So you could use a text file as a control file for a printer, for instance. Tab, line feed, carriage return, etc., really meant something.
Microsoft more or less standardized on ISO 8859-1, which then became Windows code page 1252 after a few additions.
Time went on, and non-English people started using computers in their native languages, with their own ways of representing their own characters. So the same byte value could represent a wealth of different graphical or control characters, depending on which computer read it.
Numbers and letters in text files pretty much stayed the same, for compatibility's sake. But some characters beyond that might not come out the same on every platform, if the platform expects a different encoding.
Now, there have been multiple attempts to come up with a good encoding that keeps compatibility yet also covers the rest of the world's characters. Several have been duds from a technical standpoint, including some of the early Unicode encodings. It's kind of a neat story if you're geeky enough, but anyway, Ken Thompson figured out UTF-8's basic encoding over a meal, and it was implemented in just a few days in Plan 9, saving the world from another bad character encoding standard. UTF-8 can encode most written languages with relative ease, and a garbled byte doesn't risk ruining the rest of the file, because a decoder can always find the start of the next character.
So, OK, what does that have to do with "Ken's"?
Well, UTF-8 is good, the world is migrating over to it entirely, including the web, and if you use a decent text editor (I've used SciTE on Windows for so long I've forgotten what else is out there), compatibility will generally be maintained. One of UTF-8's features is that you can always tell whether you're at the first byte of a character. To make that work, 1-byte characters use only the values 0-127, and every byte of a multibyte character is 128 or higher. So a byte below 128, followed by a byte of 128 or higher, followed by another byte below 128, means the first byte is an ASCII character, the second byte is garbage, and the third byte is an ASCII character.
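If you want to see that byte-level structure concretely, here's a quick Python sketch (Python just because it makes byte-fiddling easy; the function name `byte_class` is my own, not anything standard):

```python
def byte_class(b: int) -> str:
    """Classify a single byte the way a UTF-8 decoder sees it."""
    if b <= 0x7F:
        # 0xxxxxxx: plain ASCII, a complete 1-byte character
        return "ASCII (1-byte character)"
    elif b <= 0xBF:
        # 10xxxxxx: continuation byte, only valid inside a multibyte character
        return "continuation (middle of a multibyte character)"
    else:
        # 11xxxxxx: lead byte, starts a multibyte character
        return "lead (starts a multibyte character)"

# "Ken's" as saved with a CP1252 curly quote (byte 146 = 0x92),
# then inspected as if it were UTF-8:
for b in b"Ken\x92s":
    print(hex(b), byte_class(b))
```

Note that 0x92 lands in the continuation range, but with no lead byte before it, which is exactly the ASCII-garbage-ASCII pattern described above: a UTF-8 decoder rejects that one byte and keeps going.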
Now, an apostrophe has the same value, 39, in ASCII, 8859-1, CP1252, and UTF-8. So, if you use a real text editor, the ' key should give you what you need--just do a binary transfer both ways to be safe, in case FileZilla is part of the problem--but in a word processor, that may not be the case, because Windows code page 1252 defines a closing single quote as the value 146.
So, if "Ken's" is saved in CP1252, with a closing quote (146) instead of an apostrophe (39), then served up as UTF-8, you should see K, e, n, ?, s (the ? usually shown in a black diamond--that's U+FFFD, the replacement character).
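The whole round trip is easy to reproduce in Python, if you want to confirm that's what's happening to your file:

```python
# "Ken's" the way a word processor saves it in CP1252: the apostrophe
# has been "smartened" into a right single quote, byte value 146 (0x92).
data = "Ken\u2019s".encode("cp1252")
print(list(data))  # K=75, e=101, n=110, quote=146, s=115

# A plain ASCII apostrophe is 39 in all the encodings mentioned:
print(ord("'"))  # 39

# Now decode those CP1252 bytes as UTF-8, the way a mis-labelled web
# server forces a browser to. The lone 0x92 is invalid UTF-8, so the
# decoder substitutes U+FFFD, the black-diamond replacement character.
print(data.decode("utf-8", errors="replace"))
```

A browser does the equivalent of `errors="replace"` when it hits bad bytes, which is why the rest of the page survives intact and only the quote turns into the diamond.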