Commenting on this particular troll is probably beneath the majority of us.
However, on a general note, I think it is important to realize that every text message you send, every cell phone conversation you have, every post to the CNN forum you make, every tweet you send ... is directly attributable to your IP whether you use your own name or not. With Facebook and Google tracking everything you do, whether you are logged in or not, I would go one step further, and say all of these things are directly attributable to you personally.
I would strongly urge young people to really think about what they are putting out there. Consider this: the military was doing the equivalent of credit checks for sensitive positions during the '60s. Now you need a credit check to do ANYTHING, even things that don't involve credit. How long before an internet and phone background check is standard in the background checks organizations do before offering jobs?
I can tell you the military is doing this sort of screening right now for sensitive positions, but at least you are confronted about it. It still basically ends your career, but they will give you a chance to explain your posts. In the private sector in the future, they will just deep six your application and you won't know what happened. Or they'll let you in at entry level, maybe, and subsequently you'll start running up against an invisible barrier as you try to advance beyond the first or second layer of management. Or you will find resistance to you advancing into management at all.
Also be mindful, it can affect more than your professional life. Think about what the background checks for apartments will look like in the 2020s. Or what 'dating sites' will be like in the 2020s.
Please consider your future before you make comments on ... say ... Hurricane Katrina ... that might be misconstrued. Or post an opinion on ... say ... American soldiers in Afghanistan ... that could be taken out of context and viewed in a negative light.
All that said, the absolute best defense against these sorts of situations is just not to be a douche, which isn't very hard. If a guy or girl is dead...leave them in peace. If you can't say something nice...just don't comment.
I've run forums for a long time and bullying surfaces frequently. Most trolls use Tor nowadays.
Only very rarely do I get traceable IP addresses that can be dealt with in any meaningful way, and that's usually when there is an invasion of trolls for a day or two and some drop their guard.
The only time I've ever successfully stopped a troll was about a decade ago when I traced it back to a uni and was able to raise a sysadmin.
By and large, for the majority of forums and social sites out there (which is where most bullying happens) there is very little that the admins can do even if it's reported to us.
About the only thing working is stopforumspam.com, where hundreds of forums share reported IP addresses and block them for a short while across all of the forums. It's made for spam, but I and others submit trolls to it too (when we're absolutely sure there's nothing of worth in the person and it's not just differing views).
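For anyone curious how a shared blocklist like that plugs into a forum, here's a minimal sketch. The file format and function names are illustrative assumptions, not the actual stopforumspam.com formats (the real service publishes downloadable lists and a query API whose details may differ):

```python
# Sketch: checking new registrations against a periodically downloaded,
# shared IP blocklist (one address per line). Names and file format are
# assumptions for illustration.

def load_blocklist(path):
    """Load one IP address per line into a set for O(1) lookups."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def should_block(ip, blocklist):
    """Return True if this IP has been reported by participating forums."""
    return ip in blocklist

# Typical usage at registration time:
# blocklist = load_blocklist("listed_ips.txt")  # refreshed on a schedule
# if should_block(request_ip, blocklist):
#     reject_registration()
```

Since the listings are meant to be temporary, a real integration would re-download the list on a schedule rather than treating it as permanent.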
That's a really clever way to incorporate TOR as an input, without just disabling all TOR-enabled accounts.
Were you able to get a Bayesian algorithm to learn whether specific users were likely to be spammers or trolls? (As opposed to specific posts.) I'm guessing this isn't easy to do, since I get spammed on other services on a regular basis (e.g. Twitter spammers will set up new accounts and spam regularly).
Tor is nice, but how long will it take until textual analysis against a large corpus becomes available to everyone with an interest? We already have fairly advanced plagiarism detection mechanisms and Bayesian spam filters; similar ideas allow author-identification.
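To make the author-identification idea concrete, here's a toy sketch: a naive Bayes classifier over character trigrams, the same machinery as a Bayesian spam filter pointed at writing style instead of spamminess. This is a minimal illustration with made-up class and method names; real stylometry uses much larger corpora and richer features:

```python
# Toy stylometric author identification: naive Bayes over character
# trigrams with add-one smoothing. Train on known text per author, then
# attribute an anonymous snippet to the best-fitting author model.
import math
from collections import Counter

def trigrams(text):
    text = text.lower()
    return [text[i:i + 3] for i in range(len(text) - 2)]

class AuthorClassifier:
    def __init__(self):
        self.counts = {}   # author -> Counter of trigram frequencies
        self.totals = {}   # author -> total trigrams seen for that author

    def train(self, author, text):
        c = self.counts.setdefault(author, Counter())
        c.update(trigrams(text))
        self.totals[author] = sum(c.values())

    def attribute(self, text):
        """Return the author whose trigram model best explains the text."""
        vocab = len({t for c in self.counts.values() for t in c}) or 1
        best, best_score = None, float("-inf")
        for author, c in self.counts.items():
            total = self.totals[author]
            # Log-likelihood of the snippet under this author's model,
            # with add-one smoothing for unseen trigrams.
            score = sum(math.log((c[t] + 1) / (total + vocab))
                        for t in trigrams(text))
            if score > best_score:
                best, best_score = author, score
        return best
```

The point of the comment stands: none of this needs anything exotic, which is why Tor alone doesn't protect a distinctive writing style.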
I would, in fact, be somewhat surprised if nobody has proposed deploying this kind of thing against, say, Anonymous.
Someone could use the exact same technique to write something and have it be attributed to someone else.
Say you use an n-gram analysis at the word level. I can just build a tool that processes a corpus, and then as I type, suggests the most probable next word for me. I can simply accept those words when appropriate and look like a different person.
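The counter-tool described above is easy to sketch. Here's a toy word-level bigram version (higher-order n-grams would imitate style more closely); the names are illustrative:

```python
# Toy next-word suggester built from a word-level bigram model, as
# described above: process someone else's corpus, then accept its
# suggestions while typing to look like a different author.
from collections import Counter, defaultdict

def build_bigram_model(corpus):
    """Map each word to a Counter of the words that follow it."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest_next(model, word):
    """Suggest the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]
```

Accepting the model's top suggestion often enough would pull the resulting text toward the corpus author's statistics, which is exactly the attack on n-gram attribution the comment describes.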
One of my projects was shut down by Craigslist (NotifyWire.com). I took down the site and posted the details explaining why NotifyWire was going off-line.
Six months later, while attempting to refinance my house, I was told to take my name off the mortgage application because my corporation was under legal threat by Craigslist and thus was a liability that the underwriters might cite as a reason to decline the application.
Wow, that's actually crazy! You design a simple bootstrapped website that "failed", and later Craigslist's complaint about it ends up somewhere your future may depend on?? Can you share some more details?
No, the mortgage broker did a google search on me, my wife, and my corporation. They found NotifyWire.com where I released the details of the Cease and Desist letter sent by Craigslist.
Since my income comes from my company (though mostly from consulting activities) the broker thought the underwriters might think my income future was in jeopardy.
He told me mortgage underwriters are doing web searches on applicants, which is the scary part.
Because they have no need to declare everything they base a decision on, and the decisions they make will often be made at face-value without consideration of any other information that might offer a more rounded view.
That sounds great. Would you want someone putting restrictions on what factors you're allowed to consider when you're deciding whether to lend someone an enormous sum of money?
As someone related to a mortgage underwriter, I can tell you that they aren't doing Google searches on you[1]. They have all the documents they need to make a judgement call; if they don't, they'll kick it back to you with a list of clarifications.
[1] This rule only applies to Fannie, Freddie, and the SBA, which do the majority of mortgages. If you go through a private bank, the rules are off.
Wouldn't it have been possible to just get Craigslist to submit some sort of documentation saying that they wouldn't sue you so long as the site remained offline?
While your advice is generally sound, I'm concerned about the implied exhortation to self-censorship of controversial, but legitimate, views that it carries with it. Really, 20 years from now, if they're doing those kinds of background checks, will the checker care more that the checkee trolled some RIP group, or espoused support for Wikileaks? Will they care more that the checkee called someone "a fucking retard" on the comments of a blog post, or that they wrote a post arguing in favor of fighting back against cops who attack citizens?
Ultimately, the obnoxious-but-innocuous comments are just noise; it's the political views that run counter to the mainstream that will raise the most flags, and which require the most freedom from censorship, self- or otherwise.
Each checker will not do the digging themselves, rather, companies like Palantir will offer info products like "Individual Troll Quotient" or "Online Political Summary" which will be used in the same way credit profiles are for lending. This already exists above board with systems like Klout, and in the background a number of companies sell services like these to marketers.
> If you can't say something nice...just don't comment.
That's great advice, both online and face-to-face. Well, probably "useful" rather than "nice", and say it respectfully; e.g. when debating an idea, arguing your position may not be seen as nice...
It is all so true that what you say can come back to haunt you. It is the responsibility that comes with the freedom (of speech, in this case). You cannot have one without the other, but while it is important to understand and accept the responsibility, you shouldn't let it frighten you into giving away your freedom.
I agree with you other than in this part:
> How long before an internet and phone background check is standard in the background checks organizations do before offering jobs?
This is a typical scenario for an unaware teen: he signs up for Facebook, puts up a bunch of pictures, then gets into some sort of trouble because of it (a stalker, trouble at work/home/school, etc.). That's like a cold shower. Most teens learn from the first, second, or third mistake. Then they stop posting photos, or at least think twice before clicking upload.
Not only would it be an incredible abuse on FB's part to share your history in a way that made your data as valuable as credit card history, I am sure people would stop using Facebook if they knew their _private_ Facebook life had stopped them from getting a credit card or some other benefit.
Many of us would say and do crazy things on Usenet (in my defense, I was between 13 and 17 when I used Usenet). News servers would delete the posts after 30 days and there was little to worry about, or so we thought.
...until Deja News came out - a cached, searchable store of all Usenet posts, which Google later acquired.
Everything we put out on the Internet can and probably will be stored by someone forever.
I saw the whole episode this snippet came from, and I feel it should be mentioned that the whole thing was very much an assault on online anonymity.
The angle was that people can be very immature and very nasty via the Web when their identity is obscured, which is hardly something to dispute. However, the programme seemed to be hinting that public forums should be more regulated to prevent this being possible, which seems to be a poor alternative.
One of the more memorable snippets involved them contentiously asking a Facebook representative why Facebook can't run phonelines to manage abuse complaints from users.
Everyone who's for anonymity or pseudo-anonymity wants an all-or-nothing solution where they are totally hidden. The people who propose making everyone known want everyone to totally put all their information online with no protections.
What annoys me is there's a very simple middle solution: You are anonymous to other users, but not the company running the site. If you do something like this, then the company has the right to out you, similar to a DMCA takedown.
With that you could have those people who want to stay hidden from others, but you'd prevent idiots like this because it'd be easy for someone to report them and have them outed. It would only take a couple of bad trolls getting their "troll bit flipped" for the behavior to be reduced.
You have to use a real name, address and credit card to register an account. But in-world, no one knows who you are and there is no practical way for anyone to find your real identity. And for misbehavior (griefing or harassment), they can block you (by IP I guess) for days, weeks, or forever.
You use the phrase "outed", which to me implies public exposure, so my post is based mostly on that premise. If that's not the case, please just clarify and ignore the applicable parts.
In theory that sounds nice, and to a certain extent there are some similar systems in place already...but I just don't see it working that well overall.
We already have a system like this for sex offenders, and it's got some serious issues. In such a system it would either be up to each site to maintain a public "troll list" or there would have to be some central repository. In the case of the former, when you look at a site's list you won't be able to tell which entries are just someone who made an offensive comment or two on a single site and which are "repeat offenders" who are on many sites' lists. You'd have to collate the lists somehow, and even then you'd have trouble differentiating the various "levels", handling mistaken identity, etc. In the case of a central repository, that's a whole 'nother can of worms.
Then there's the question of what constitutes an "identity" for the purposes of being able to post. A name? An address? Those don't seem suitable to me for such purposes, they're too fudge-able and temporary. A State ID# or SSN seem too personal and permanent. If the former are compromised, you can limit the damage somewhat, the latter less so. There's the issue of proliferation. Even if you use addresses, every single time you put your info out there is the increased risk of it being compromised. It's not unreasonable to want to limit the amount of times you expose yourself to the risk of security breaches.
There's the issue of the practical burden it would place on the sites. This isn't so much of an issue since it would be voluntary, but it's something to consider.
I think the ethical issue of whether it's ok to out someone who obviously has enemies on the internet is a tricky one. Again it gets to the issue of what constitutes an "identity" for these purposes. If it's too specific it becomes dangerous to release, or even store... and if it's not specific enough it becomes dangerous due to confusion, or loses its sting.
There's also the question of whether it will really deter anyone. As much as they like to try and evoke the image of a deliberate prankster, I don't think most trolls really think things out that much. It strikes me as impulsive behavior. So they get hit on one site after an impulsive trolling and just move to another one.
Assuming that it's specific, I think the possibility of some sort of violent retaliation needs to be considered. You could argue one reaps what one sows, but I don't think that's an attitude that belongs in the consideration of organizing a society. We should value protecting everyone over punishing wrongdoers. A balance needs to be struck between removing the cause of harm from society and not discarding anyone.
Overall I wouldn't be surprised if such a system comes into being. Some walled gardens where members are authenticated based on a one way hash of some personally identifiable information, or a cryptographic key-fob or something...and if they cause a ruckus their access will be revoked. However I think such a system would have enough problems that it wouldn't take off except in isolated pockets that highly value having an exclusive discussion among a verified group, and there I think it would be more about that than preventing trolling.
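The one-way-hash idea in that last paragraph can be sketched simply: the site stores only a keyed hash of some identifying detail, so it can recognize (and re-ban) a returning identity without retaining the detail itself. This is a sketch under stated assumptions (hypothetical names, a placeholder per-site key); a real deployment would need proper key management and a stronger proof of identity than a self-reported string:

```python
# Sketch of a walled-garden membership check that stores only a keyed,
# one-way hash of personally identifiable information (PII). The site
# can recognize a banned identity on re-registration without keeping
# the raw PII on file.
import hashlib
import hmac

SITE_KEY = b"example-secret-key"  # assumption: per-site secret, kept private

def identity_hash(pii):
    """Keyed one-way hash of an identity string (e.g. name + ID number)."""
    return hmac.new(SITE_KEY, pii.encode("utf-8"), hashlib.sha256).hexdigest()

banned = set()

def ban(pii):
    banned.add(identity_hash(pii))

def is_banned(pii):
    return identity_hash(pii) in banned
```

Using an HMAC with a per-site key (rather than a bare hash) means one site's stored hashes can't be correlated with another's, which fits the "isolated pockets" prediction above.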
For trolls, I think measures similar to what is mostly used now will continue to be the main defense. The ability to moderate, vote, etc.
Well, Sex Crime Notification laws do appear to have a deterrent effect: https://www.ncjrs.gov/pdffiles1/nij/grants/231989.pdf So it may be that allowing trolls to be traced back to their original communities and their remarks publicized would be effective.
It has a small deterrent effect on first-time offenders. However, the study also notes that it does not significantly affect repeat offenders; that there's little difference in how "dangerous" (likely to reoffend) registered offenders are vs. ones who forget or refuse to register; and that putting the list online had little effect. It also notes that in terms of community safety it may be harmful, because the scarlet-letter nature of the registry encourages offenders to plead to non-sexual charges and the courts to allow them to.
Systems like we are discussing can and do exist, I just don't see them as a necessary or particularly effective solution...especially when compared with the necessary trade-offs.
Look at AOL, which during its heyday was basically this sort of walled garden. You needed a phone #, name, address, and payment combination to register. Many chatrooms were patrolled by moderators... there were even investigations and "stings" of trouble-makers. If you got TOS'd too many times you were permanently banned.
However trolling, illegal activity, pornography, etc thrived pretty well regardless. It was easy to evade the bans and there was a steady flow of new trolls.
I think the best solution in terms of respect for privacy and effectiveness is a combination of crowd-sourced moderation and intelligent detection of likely "trolling" and problem posters by software.
I feel it should be mentioned that the whole thing was very much an assault on online anonymity.
Which is kinda ironic, given that this guy apparently just doesn't care, despite knowing that his name and face are going to be broadcast on primetime BBC1.
This is an incredibly important point. There are two stances to take. First, FB can claim they are a “common carrier,” does not censor, and not responsible for what people post (just as the telephone company is not responsible for what people say on the telephone). Second, FB can pick and choose what is allowed but bears responsibility for what it allows to be posted.
Whether you like or dislike their particular choices, the very fact that they censor breastfeeding mothers means that they are accepting moral responsibility for what they decide not to censor.
I do not accept the statement that they “Abhor Nazi ideals and find Holocaust denial repulsive and ignorant. However, we believe people have a right to discuss these ideas and we want Facebook to be a place where ideas, even controversial ideas, can be discussed.” Where was that line of thinking when they censored breastfeeding?
From Facebook's standpoint I would imagine that these are viewed as completely independent policies. On one hand Facebook does not want to censor discussion, on the other Facebook does not want to host explicit images. The distinction that you draw above simply has to do with their definition of what is an explicit image.
Would you make the same argument if we were talking about Holocaust deniers and personal pornography? Because I am pretty sure that is the distinction that Facebook is making. You just disagree on what constitutes pornography. (I am not agreeing or disagreeing with either of you around that definition either way, just saying that I think you are arguing a different argument).
Yes I absolutely would make the same argument. The moment you begin to moderate your medium, you accept responsibility for it. I’m not saying anything in this discussion about whether FB is right or wrong to allow hate material, just that they cannot hide behind “We abhor it but believe we should not censor discussion.”
UPDATE: I have no idea, but I am curious: Can you write porn on FB? Soft-core? Hard-core? As suggested by another respondent, I’m pretty sure you can post a picture of yourself with skinhead tattoos. Can you post a hateful picture? Is this just words vs. pictures? Or is it ideas vs. so-called porn?
I think you are still missing what I meant.
I think it's a Content vs Pornography distinction. If we wanted to test it, I think the question to ask would be are they blocking explicit text as well, say erotic stories, or even just swear words? I know that some portions of their platform do automatically censor certain words (for example website comments), so it would be interesting to see to what level that policy is policed.
Oh---I did miss your point then, but I think the text/photo distinction would actually be somewhat easier for them to defend than a content/porn distinction; as I understand it, porn is content, as a matter of very well-settled caselaw.
As far as I understand it, the "censoring" starts when someone flags a photo for review. So if I see a photo of my friend breastfeeding and find it to be in bad taste, I flag it- it then goes on to FB for review and perhaps a takedown. No one seems to mention that in the breastfeeding scandal, SOMEONE had to flag the photo for review, which means that someone found it offensive. So it's not Facebook going through every single picture ever posted to find ones to take down, it's Facebook responding to user requests to take pictures down.
I wonder what the ratio of flagged pictures to removed pictures is. I imagine it's pretty high.
However, the original argument is flawed: Holocaust deniers fall under First Amendment rights, while breastfeeding falls under obscenity law.
> Whether you like or dislike their particular choices, (...) they are accepting moral responsibility for what they decide not to censor.
Less charitable explanation: FB knows that basically nobody cares about nazis, especially after you've trotted out "free speech", while there is actually a measurable number of people who'd give up Facebook over breasts.
I have the feeling that this is the ultimate 'feed the troll' reaction. Actually caring enough to track them down, talk to them and reason with them about obviously attention-seeking and ill-meant content? Why?
Well, we can't forget that even trolls have day jobs, bosses, friends, family, acquaintances. This particular troll wrote some mean, racist, inhumane comments. He wants attention, but not to his real self, which is why he posts under a fake account.
So as a society, what do you do with such offenders? Freedom of speech wasn't really meant to protect garbage like that. Well, one option is you can publicly shame him, or at the very least engage in a real conversation to see what his logic is. The internet is a free place, but we can't become so desensitized to racism and hateful comments that we dismiss them as part of our daily lives, as nothing to be concerned about. People do end up reading nasty comments, and it does affect them.
> Freedom of speech wasn't really meant to protect garbage like that.
Whether it's garbage or not, that's your personal opinion. And even if it were the opinion of 99% of the population, you would still find people who like his writing.
I am not sure where the problem is: whether it is him writing this stuff, or others being unable to ignore his writing. I don't mind trolls; they are sort of a challenge. If someone is smart enough to get an intelligent individual involved in a garbage conversation, then it's that individual's fault. The internet is a product of human beings, and like in real life there will always be statistical noise: an asshole in the neighborhood, one stupid dude out of the hundreds you're working with, etc.
Checking with Wikipedia, a troll is someone who feeds off others' responses to his comments. I bet if you ignore this guy long enough, he would stop trolling.
If I forget my 'ignore them' position for a second: Either they do violate the law or they don't. If they do, the law is responsible here, not you/we. If they don't, it's a rather weird idea to hunt them down and blackmail them to stop that behavior or you're telling the employer/his wife/his mom about those Facebook posts.
Let's focus on improvements to moderation systems and teach people around us that they shouldn't take random posts on the internet seriously.
Well, firstly I never seriously suggested we systematically do anything to trolls as a form of real justice.
Secondly, you (and other people) shape society's culture and norms. Trolling is a matter of culture, not just law, and it is not simply the government's responsibility. The people pass laws which the government enforces (or at least that's how it's supposed to work in a democracy).
Thirdly, I actually made the same comment as you the other day, regarding "don't take internet comments seriously". Here's a great reply:
I will play the part of the alarmist and also be the first to invoke Godwin's law by stating that unchecked rhetoric was the first psy-op the Nazis used. Group acceptance of dehumanizing a different group is the first step down a horrible path. It should be checked at every advance. The seed in America seems to have been planted with Muslims, as it became acceptable in good company to view them all as fundamentalist psychopaths; the government and media were all too willing to reinforce such groupthink, as it strengthened their case for war, and now that the seed has set it has grown to bear fruit. Vilifying or dehumanizing a group of people always leads down a path of darkness. I would not hastily blow it off as just stupid crap said on the internet; it is a dangerous mindset, easily infectious to those susceptible to groupthink, and the spewers of it tend to be far too willing to act when they feel emboldened by numbers and the echo chamber.
If, like me (and the US Constitution), you think we should have extremely strong freedom of speech protections, that doesn't mean you automatically think there shouldn't be private repercussions to some speech.
I watched this last night. On the whole it was a pretty stupid programme with equally stupid people not knowing how to report and block people. Why not just ignore sites like littlegossip, and why have a Formspring account at all?
"I love that the whole thing is narrated like they're tracking down an animal in nature. I loved the end "So, there you go, an internet troll. That's what they look like." Yep, that's what they look like." - this did make me giggle though.
The situation is analogous to the difference between the immune system of someone with limited exposure to all the normal diseases of western civ (f.ex. theo-fascists, Nazis, white supremacists) and someone brought up with everything sterilized and an instant antibiotics cure if he sniffs once.
EDIT: Not saying that murderous leftist radicals can't do damage, but the west's left has mellowed.
I'd like to see a Chris Hansen style confrontation, where they track down trolls, confront them with what they've written, and then see how they justify or apologize their way out of the situation.
I can't imagine a 13 year old defending his racist rants behind a veil of tears being particularly entertaining. The fact that this particular guy happened to be an adult is probably a bit of an enigma within the world of trolls.
I think you are under-estimating the percentage of trolls that are grown adults.
Then again, I grew up in a place where the local paper's letters to the editor read like a best-of collection of racist trolling (pre-internet, even.) Most of the letter writers were over 70 and still spouting ungrammatical poorly-articulated vitriol.
I think people forget that reality TV and the like are edited and spliced to appeal to the audience. Anything you see on TV is weighed on whether it's entertaining or not... should a show/newscast adopt the OP's recommendation, you can safely assume that each "lead" they track down that ends up at a kid would simply be swept away, because it doesn't make for good TV.
Maybe it's my American 1st amendment ideals but I'm almost more disturbed by the tone of the report than the troll (who is obviously a huge douchecanoe).
First of all, how did they track this guy down? Sure, there are legal ways of doing it if the guy is sloppy. But how does a report on the ethics of the Internet perpetrate a huge invasion of privacy without so much as passing comment on it? Disturbing implications for what actions are justified when directed at people with the "wrong" ideas.
Second, notice the reporter's repeated emphasis on the illegality of racist speech. He's not just shaming this guy; he's beating the drum of state censorship. Again, maybe it's just my ideals, but this is obviously disturbing, maybe even more so than the trolls themselves.
I am not able to watch the whole program but judging from the synopsis it doesn't sound like it entails any substantive discussion of the ethics of privacy and censorship on the Internet, e.g. interviews with civil libertarians, which is what any serious report on trolling should include. As it stands it reeks of sensationalism.
In addition it somewhat upsets me that someone could get jail time for making an offensive comment. Depending on the jurisdiction or culture, I know I have said things that would be offensive to someone (specifically regarding religion.) To be faced with jail time over something like that does not sound like something I would expect from a western country.
>In addition it somewhat upsets me that someone could get jail time for making an offensive comment. Depending on the jurisdiction or culture, I know I have said things that would be offensive to someone (specifically regarding religion.)
But did you say them with the specific, primary intent to cause that person harassment, alarm or distress? I would wager not.
Note that I don't necessarily agree with the legislation, but it's an important distinction.
As commented above, yes, it is possibly illegal under UK law. For example:
(1) A person is guilty of an offence if, with intent to cause a person harassment, alarm or distress, he: (a) uses threatening, abusive or insulting words or behaviour, or disorderly behaviour, or (b) displays any writing, sign or other visible representation which is threatening, abusive or insulting thereby causing that or another person harassment, alarm or distress.
What are we to gather from this? That assholes on the Internet are also assholes in real life?
"Confronting" people like this does little to change their behavior. At best, it publicizes their identities, and causes some minor level of disgrace, but why would that matter to them?
I think it was meant to educate us, not him. I assumed trolls were spineless and used Internet anonymity to massage their egos. This guy, though, had no qualms with reconciling his online and real identities. This was interesting for me.
Further, for those trolls that do care about the perception of their real identities, perhaps pointing to something like this and saying "you sound like this guy" is enough to change behavior, albeit at the margin.
It's ironic that on a site called 'Hacker News', where everyone knows the older, 'proper' definition of hacker, people don't seem to care to point out what the older, 'proper' definition of troll is.
Fortunately wikipedia still has in their definition:
In Internet slang, a troll is someone who posts inflammatory,[2] extraneous, or off-topic messages in an online community, such as an online discussion forum, chat room, or blog, with the primary intent of provoking readers into an emotional response[3] or of otherwise disrupting normal on-topic discussion.[4] The noun troll may refer to the provocative message itself, as in: "That was an excellent troll you posted".
So, a proper troll on HN might pop in and point out that functional languages, while pretty and amusing, are largely unused because their performance is insufficient, and make a comparison to, say, Perl, provoking people to correct them and argue the point.
Look up adequacy.org to learn about proper trolling.
"a troll is someone who posts inflammatory ... messages in an online community ... with the primary intent of provoking readers into an emotional response"
Ah, the good times when BBC documentaries were actually high quality. Fascinating how their investigative journalism got all the way down to hunting people who talk trash on Facebook and exposing them. Are these really the most substantial social conflicts Britain has to worry about?
Not being familiar with privacy laws in the UK, did he have to sign a release to have his face shown on that programme? If so, I fail to see how it's effectively shaming someone if they willingly submit to being filmed.
I have no experience in this area, other than having seen trolling on mailing lists, but my guess would be that a face-to-face confrontation with the troll would only give them further ammunition for subsequent enraged outbursts, or serve to make the situation even more dangerous/volatile.
Probably the solution is not to react to the troll, and for their outbursts to be met with silence. Don't read their content, and avoid forums or lists where trolling regularly occurs. When that's not possible report them to the list/forum admin, without engaging with the troll directly.
We've tried "don't feed the trolls" for years: it has only led to them becoming more vicious and egging each other on. It means the only thing they hear is how awesome they are from their friends.
It is time to engage: people with patience could engage them in good faith: instead of asking "how do you justify it?" ask "what is your life like that you feel the need to do this?" There are a number of violence-intervention programs that could be adapted.
Alternatively, we can drive them to ever-greater heights of rage by armchair psychoanalyzing them and bombarding them with ridicule: laughing at fear can make it go away. We could shun them, alienate them, mock them and otherwise make trolling unpleasant to engage in. Right now the incentive to troll is there and there is no disincentive: we need to create one.
In real life, the disincentive is that someone will take a fully-justified swing at the troll or they will be arrested for harassment, stalking and verbal assault. We need to enforce the internet-equivalent of getting punched in the face, since the government doesn't appear to enforce laws against harassment and assault online.
We don't accept it because it's abhorrent behavior but these people need mental health help. Getting them to seek help is a nearly impossible task though.
What's the difference between personality and personality disorder? Is there anyone who is not mentally ill?
Ultimately, everyone has their foibles. It's what makes humans human. Only when someone becomes Internet Famous, though, do we start diagnosing them with illnesses. Why is that? Why can't we say, "wow, this guy has chosen a pretty worthless hobby. Too bad he can't use his time to build model trains or something"?
I agree - we're all a bit crazy; ironically, being aware of this fact is probably the most sane way of being.
The problem is, sometimes the way we interact with the world becomes so obtuse it inhibits our ability to communicate effectively and form healthy, meaningful relationships.
A diagnosis of what's traditionally classified as a personality disorder isn't necessarily going to elucidate any kind of universal truth, but it might provide context for self-improvement.
> The problem is, sometimes the way we interact with the world becomes so obtuse it inhibits our ability to communicate effectively and form healthy, meaningful relationships.
In this particular case, when the reporter caught up with the guy he appeared to have a wife and young child. So he seems to have at least one "healthy meaningful relationship". He's probably not a complete jerk when face to face with people. It's probably more along the lines of this: http://penny-arcade.com/comic/2004/03/19
I'd say the line between personality and personality disorder is when you derive pleasure out of hurting people who have done nothing to you. Even more so when they are already grieving.
The concept of lulz is funny to me when it comes from a place of humor rather than hatred. Perhaps it's a gray area for some people at what point it stops being funny and starts being simply malicious.
Or, more likely, people don't believe that actions on the internet do/should have any consequences.
I'm somewhat inclined to agree, especially when it's something as simple as a website comment that can be permanently erased with a single click.
It's like the people who play games like League of Legends and then help the other team or team kill or generally just play to tick folks off. There was a Wired article about this some time ago that had one line which stuck out in my mind: "They're playing the same game you are... they are just playing to a different objective".
Personally, while I agree that trolling an obituary page is in >>incredibly<< poor taste and someone should probably clout the guy on the back of the head, invoking the government to do it is a step down a road that nobody wants to go down.
Kill the comment, report the loser, and move on. That's how you deal with idiots on the internet. Any of the other solutions mentioned here are suboptimal for a number of reasons.
The tone of this article is downright chilling. Sure, this guy is a complete nuisance but he should have every right to spread his hate speech wherever property owners condone. The current anti-bullying meme that is being propagated by mass-media and politicians is just another in a long line of ruses designed to limit the human rights of the electorate.
It varies by country, but assuming this troll is in the UK, hate speech is not a protected form of free speech in his country. So in the UK he doesn't have the right to spread his hate speech. Now that his name is known, perhaps somebody will take him to court, who knows. The US is one of the few countries where hate speech is technically protected, but even still there are some exceptions, including "fighting words" and harassment, which could possibly be relevant.
This guy seems to be aware that his behavior would be likely to land him "9 weeks" in jail which he dismisses as an insignificant punishment.
While it would be lovely if we could find a way to get rid of harassing trolls on the internet, suppose someone on Facebook posts a comment disparaging [the founder of a certain faith known to try to assassinate people for disparaging said founder], then members of that faith might make formal complaints to the service demanding his account be deleted. So yeah, probably best to just try to make options to permanently block specific users/ip addresses from EVER posting to your feed.
I do not agree with the statements of the troll but will defend his right to say them. The internet is making the world like a small town. Pissing on people on the internet will be like pissing on people in line at Walmart. You can, but people are going to hate you, and you will never be forgiven by anyone for anything bad you do unless you hire a professional to erase your histories.
This particular guy was located in Wales. Speech is protected under European and UK law, but I'd say not quite to the same extent as in the US. For example, there are particular laws against defamation, harassment or racism in speech. One particular law his actions may violate is this:
(1) A person is guilty of an offence if, with intent to cause a person harassment, alarm or distress, he: (a) uses threatening, abusive or insulting words or behaviour, or disorderly behaviour, or (b) displays any writing, sign or other visible representation which is threatening, abusive or insulting thereby causing that or another person harassment, alarm or distress.
Wrong. I'd defend his right to say it if he weren't anonymous. In the US you do not have a right to anonymous speech. In most countries you don't have this right. You can mostly say whatever you want in public because the people around you will know who you are and can respond.
Wrong again. First case she didn't have to put her full address on, but everyone knew who she was because she was actually walking around giving them out and she didn't mislead them.
Second case is about religious freedom and the government not having the right to out members of a religion.
Neither of the cases are about a person being totally anonymous, or someone spewing hate speech anonymously, or someone bullying someone anonymously.
But hey, people will spin this into "I now have the right to troll a guy's dead grandma."
Additionally, these are just court cases. Sure, they have a slight force of law if applied narrowly to similar cases. When I say a right, I mean in the constitution or an amendment. People like to say the constitution allows anonymous speech, but it doesn't by any stretch.
So, if I were to be exact: I stand corrected, there are two obscure cases that relate to two small situations not related to the internet which allows for a small amount of anonymous speech, which does not give people a total right to be anonymous all the time.
Supreme court and most lower courts have consistently ruled for anonymous speech rights. It's kind of ingrained into the national character with the Federalist Papers and all that.
AFAIK, there's never been a supreme court ruling against anonymous speech rights. There have been lower court rulings against them, usually in cases that look like stalking.
By default, all speech is presumed to be permitted under U.S. law. The First Amendment, and related court cases, concern the power of the government to limit speech--not the right of free speech itself. So, to make the case that anonymous speech is not a right in the U.S., you would need to show a case where the Supreme Court specifically ruled that the government may limit anonymous speech. I'm not aware of any such case.
You're proposing what sounds like a pretty novel interpretation of the first amendment. Do you have any evidence that it's widely accepted, or that it's correct?
All that said, the absolute best defense against these sorts of situations is just not to be a douche, which isn't very hard. If a guy or girl is dead...leave them in peace. If you can't say something nice...just don't comment.