2016/08/13
<issue_start>username_0: I have learned a lot reading xkcd and talking about HAL 9000 and Solaris over a beer. Granted, that does not make me anything like an expert in AI. But I see some value in it:
* I get to know concepts that I look up afterwards
* I reflect and try to imagine new problems and solutions
* I get another view on news or current (technical) problems I face
* I get some things to procrastinate on
What do you think about questions related to cinema, books and novels, science fiction, etc? What about jokes and funny AI-stuff?
I am not very sharp right now, but maybe something like:
**Was HAL 9000 programmed to be an egoistic jerk, or did it develop that by itself?**
Or:
**What is your favourite AI-joke?**
([yeah, got it here :)](https://stackoverflow.com/questions/234075/what-is-your-best-programmer-joke/234476))<issue_comment>username_1: The first question, about HAL 9000, is I believe on-topic on [Movies.SE](https://movies.stackexchange.com/), [Sci-fi.SE](https://scifi.stackexchange.com/) or [WorldBuilding.SE](https://worldbuilding.stackexchange.com/questions/tagged/artificial-intelligence), but not here, where we require real-world questions not related to science fiction.
As for the second one, regarding a joke, the closure reason quoted at that link says it:
>
> is not considered a good, on-topic question for this site
>
>
>
This is because opinion-based questions, or those asking for something from an unlimited list of possibilities, ['are not a good fit for this type of Q&A site'](https://meta.stackexchange.com/a/98366/191655). As [@RCartaino](https://meta.stackexchange.com/a/98366/191655) said:
>
> Stack Exchange is well-suited to asking very specific questions that represent real problems you encounter in your day-to-day work. A big part of that process is asking very long-tailed questions; the kind where folks with specific expertise in the subject can propose the best possible answer, which is then voted on so the best possible answers rise to the top.
>
>
>
There was actually a Humor site proposal, but it was [closed](https://area51.meta.stackexchange.com/q/24036/61861) for the reasons above.
Upvotes: 1 <issue_comment>username_2: No, they're not. "Getting to know you" or fun, minimal-mind questions are not a good fit for Stack Exchange. Notice how the Stack Overflow question you linked is locked. If it hadn't been locked for historical significance, it would definitely have been deleted.
Especially during the private beta, we must focus on producing quality content. For fun, try [chat](http://chat.stackexchange.com/rooms/43371/artificial-intelligence)!
Upvotes: 4 [selected_answer]
---
2016/08/13
<issue_start>username_0: This question:
* [What is early stopping?](https://ai.stackexchange.com/q/16/8)
has been closed as off-topic.
I don't see why it should have been.
'Early stopping', in machine learning (**a branch of AI**), is used to avoid overfitting during training. Therefore I don't see this question as off-topic.<issue_comment>username_1: I was one of the close voters, so let me explain here why I voted to close.
As I, and some other users, have said *multiple times* before, we should avoid questions that are only related to machine learning. Those questions are already on-topic on both Data Science and Cross Validated.
The point of creating this site was to fill a gap that was not already covered by Data Science and Cross Validated. Early stopping is on-topic on both sites ([1](https://stats.stackexchange.com/search?q=early+stopping), [2](https://datascience.stackexchange.com/search?q=early+stopping)). Remember that if this site looks too much like Data Science and/or Cross Validated it *will most likely **not** get out of private beta*.
Upvotes: 2 <issue_comment>username_2: Data Science and the Stats SE already have a huge overlap (>~80%), and I am worried about having a third SE that also significantly overlaps with them, which is why I voted to close.
I think the best solution would be along the lines of this proposal: [build and strengthen the Stack Exchange community with “crossover questions” between sites](https://meta.stackexchange.com/q/199989/178179).
Upvotes: 2
---
2016/08/15
<issue_start>username_0: I have provided an [answer](https://ai.stackexchange.com/a/1522/169) where I fail to find a critical source. After looking for it again today, I still cannot find it. Worse still, I have read new articles, reviewed some at the time, and cannot find any other report that *explicitly* shares the critical source's point. I did find reports that *allude* to the argument.
I have added a [warning](https://ai.stackexchange.com/a/1522/169) about that missing source. I believe it does not diminish the answer's value to the thread, but the missing source does impact its credibility. Since it is the accepted answer, I am considering deleting the paragraph that mentions the source.
What should I do? Leave the warning, or remove both the warning and the paragraph?<issue_comment>username_1: We do not require that every answer be fully supported by sources you can currently link in the answer. So, if you are really certain that it is in fact true, you can leave it as it is. However, in this case, you aren't really certain anymore that it is true, or at least I wouldn't be.
If you find, as now, that it might not be true, or that in fact the opposite might be true, you might want to clarify by adding a paragraph claiming that the opposite is true ("On the other hand, (source 1) and (source 2) claim [...]"), instead of, or in addition to, the warning. It might be a good idea to start the other paragraph with something like "I've read this", so that it is clear that the other paragraph is properly sourced while the original one is not. You might also want to add a small conclusion (e.g. "I'm no longer certain which it is").
You can also consider asking a question about it (this is not always appropriate) and linking to this question in your answer, at least when you receive a satisfactory answer. Also, please link to the answer in your question.
Upvotes: 2 <issue_comment>username_2: If you can't verify the veracity of information, I think the safest thing to do - ethically speaking - is to annotate the information appropriately, as you've done. It's like Wikipedia's "citation needed" markers: they call out information that could be helpful, but is in need of further verification.
I agree with username_1's answer. In short, cite sources when possible, and make it clear that we might not have the right answer nailed down yet.
Upvotes: 1
---
2016/08/16
<issue_start>username_0: Ideally Moderators are elected by the community, but until the community is large enough to hold a proper election, we will be appointing three provisional Moderators to fill those roles.
We need your help. Please nominate folks you would like to see become provisional moderators for this site. Your input will provide valuable insight to help us make our selections. You can read more about the process here: **[Moderators Pro Tempore](http://blog.stackoverflow.com/2010/07/moderator-pro-tempore/).**
The Nomination Process:
-----------------------
* **Nominate a user** by posting an 'answer' below. Each nomination should be a separate answer. Use the template at the bottom of this post to complete your nomination.
* **Self nominations are encouraged.** This is a volunteer activity, so users should not feel obligated to accept these positions. A self-nomination is simply a way to say, "I am very much interested in this, so let my record speak for itself."
* **Tell us about the candidates.** Nominations can include links to other activities like Area 51 participation, participation in other sites, or any relevant thoughts/links that may help us make an informed decision.
* **Nominee should indicate their acceptance** by editing the answer to **accept/decline** the nomination. Nominees: please ensure your profile email is correct so we can contact you. Optionally, you are encouraged to write a bit about yourself following your acceptance.
>
> I accept/decline this nomination.
>
>
> Hi, I am name/location/fun fact (all optional). I live in , so I am generally active on this site from to . Some other things you may want to know about me are…
>
>
>
Here is what we'll be looking for in a Moderator candidate:
-----------------------------------------------------------
We are looking for members who are deeply engaged in the community's development; members who:
* Have been consistently active during the earliest weeks of this site's creation
* Show an interest in their meta's community-building activities
* Lead by example, showing patience and respect for their fellow community members in everything they write
* Exhibit those intangible traits discussed in [**A Theory of Moderation**](http://blog.stackoverflow.com/2009/05/a-theory-of-moderation/)
---
Nomination Template
-------------------
To nominate a candidate, copy and paste the text below as an answer and complete your nomination writeup:
>
> [](http://ai.stackexchange.com/users/<strong>UserID</strong>)
>
> [](http://meta.ai.stackexchange.com/users/<strong>UserID</strong>)
>
>
> ### Notes:
>
>
> This nominee would be a good choice because …
>
>
>
<issue_comment>username_1: [](https://ai.stackexchange.com/users/42)
[](https://ai.meta.stackexchange.com/users/42)

### Notes:
Currently the most-upvoted and most dedicated user, with relevant knowledge and skills in AI. In addition, he works in this research area, so he knows what he's talking about. His skills may help to improve the quality of this site.
EDIT by NietzscheanAI (formerly known as user217281728):
Most kind, thanks. I'm happy to accept this nomination and want to work to make this an informative and useful site. I live in the UK, so I tend to be active on the site between 07.00 and 23.00 GMT. My varied career has included games software company owner, generative music developer, software architect, pure mathematician and (for the last 13 years) AI researcher.
Upvotes: 5 <issue_comment>username_2: [](https://ai.stackexchange.com/users/75)
[](https://ai.meta.stackexchange.com/users/75)
[](https://stackexchange.com/users/3364317/ben-n)
### Notes:
This nominee would be a good choice because of his active involvement in the community's development during the private beta and his experience on Stack Exchange!
I'll step right up and offer my services to the community as a moderator pro tempore. I confess that I'm just an enthusiast when it comes to artificial intelligence, but I have been highly active here on meta, gaining the community's first silver badge: [Convention](https://ai.stackexchange.com/help/badges/68/convention). I thoroughly enjoy reviewing and I have been working the queues since the site's beginning. I've also spent a large (probably unhealthy, heh) amount of time reading Meta Stack Exchange and the SE blogs, so I'm familiar with the Stack Exchange model, the software, and the expectations for the various roles. I'm also active on [Meta Super User](https://meta.superuser.com/users/380318/ben-n?tab=topactivity), for what it's worth.
I live in Illinois (midwestern United States), so I'm usually awake from UTC 15:00 to 3:00. You can read about the things I've created in my profile. I have a blog [on which I mentioned the site a while back](https://fleexlab.blogspot.com/2016/08/ai-stack-exchange-site.html).
I've been doing what I can to make sure this site survives, and that has required casting a few close votes. Hopefully I haven't come off as too much of a maniacal ruthless reviewer `:)`. When asked on meta, in comments, or in [chat](http://chat.stackexchange.com/rooms/43371/artificial-intelligence) about why a question is closed, I always write up a helpful, respectful explanation. If I ever do something you think is less than ideal, please feel free to ask me about it! Like all humans (though perhaps not AIs!) I make the occasional mistake, and when I see that's happened, I make it right.
I have my own opinions and judgments, of course, but I would be happy to carry out as moderator pro tempore the consensus of the community, the mod team, and Stack Exchange. We're all in this together.
It's a pleasure building this community with everyone here. I look forward to continuing to the next stage of site growth with y'all!
Upvotes: 4 <issue_comment>username_1: [](https://ai.stackexchange.com/users/10)
[](https://ai.meta.stackexchange.com/users/10)
[](https://stackexchange.com/users/555192)
### Notes:
The second most-upvoted and active user, a data scientist with the right skill set across different AI branches. His answers are reliable and interesting. His skills can be a great asset in improving the quality of this site.
EDIT by <NAME>: Thanks for the nomination! I'm pleased to accept it. I'm interested in helping this site help people better understand AI and the issues surrounding it, both through direct effort and community building. I've been clearing out review queues here as soon as I got access to them, and that's typically the first thing I check after my comment inbox.
I'm currently in Austin, Texas, and so would typically be online from about noon to 2am UTC. I've been doing machine-learning related work for, depending on how you count it, about 8 years now, mostly as a student but now also as a data scientist. My research effort has mostly been in numerical optimization, machine reliability, and time series analysis, rounded out by my personal interests in psychology, economics, and philosophy. I've been interested in intelligence for as long as I can remember, and that grew to encompass artificial intelligence as soon as I was introduced to it.
To a large degree I 'grew up on the internet'; forum-posting has been a major hobby for over half of my life at this point. I've consistently had a reputation for being polite, calm, and open-minded; qualities that I hope would serve me well as a moderator.
Upvotes: 4 <issue_comment>username_3: I'll volunteer myself.
[](https://ai.stackexchange.com/users/33/)
[](https://ai.meta.stackexchange.com/users/33/)
[](https://stackexchange.com/users/48222/)
### Notes:
This nominee would be a good choice because - he is passionate about AI and its potential applications for improving the human condition. This nominee is also a strong supporter of open exchange of scientific knowledge and technology, as expressed in the Open Source, Open Web, Open Data, Open Science and Open Hardware initiatives. This nominee has been participating in multiple Stack Exchange communities for many years.
You could consider this nominee to be the "ruthless NON closer" as he believes that closing questions is generally harmful to the community, as it is perceived as an aggressive and hostile act by whoever posted the question. This nominee believes that "bad" questions can simply be down-voted and allowed to die from lack of activity in *almost* all cases.
This nominee believes we can strike a balance between being "beginner friendly" and still keeping things interesting enough to attract experts, but believes that it will take some time to establish our presence in the AI world and attract the high-level researchers and others of that ilk.
---
Since I volunteered myself, it should go without saying that I accept this nomination.
Hi, I am Phillip. I live in Chapel Hill, NC, so I am generally active on this site from around 10:00am through 1:00am Eastern time. Some other things you may want to know about me are: I am founder / president at [Fogbeam Labs](https://www.fogbeam.com), an open source software company. I was a volunteer firefighter for many years and was Assistant Fire Chief of my department for the last couple of years I was there.
I am the founder/organizer of the Research Triangle Park "Semantic Web / Artificial Intelligence / Machine Learning" Meetup here in the Raleigh/Durham area. I'm also active on [Github](http://username_3.github.io) and [Hacker News](https://news.ycombinator.com/user?id=username_3).
Upvotes: 3 <issue_comment>username_4: [](https://ai.stackexchange.com/users/5)
[](https://ai.meta.stackexchange.com/users/5)
---
### Notes:
While I'm not the most knowledgeable about AI, and don't have the highest reputation level, I know a lot about moderating.
In the past three days, every single day, I've cleared all the review queues I have access to. I currently own two organizations, and I moderate, or lead, both of them. I have 2 pending proposals on Area 51. I'm active on the Stack Exchange sites almost every single day. I also have prior moderation experience as a former FPC on Scratch.
It would be an honor to be a moderator on this site.
Thank you for reading.
Upvotes: 0 <issue_comment>username_5: [](https://ai.stackexchange.com/users/8)
[](https://ai.meta.stackexchange.com/users/8)
[](https://stackexchange.com/users/22370)
### Notes:
This nominee would be a good choice because username_1 is a very active user, a person who knows a lot about AI, and, I feel, cares about helping this community grow. username_1 should be one of our moderators - even if his English isn't perfect, I still think he's perfect mod material. :)
---
First of all, I would like to thank you for the nomination, and I am pleased to take on the responsibility of being a pro tempore mod. I believe that this site has a unique opportunity to make a huge impact on the global technology market driven by artificial intelligence, and on our everyday lives in the very near future, by making advanced knowledge accessible to all.
I have been using SE for over 7 years; I am experienced across a variety of fields, I am familiar with the moderation tools, and I understand their purpose.
I am an experienced software engineer specialising in a variety of information technology stacks, with over 18 years' experience consulting across a range of sectors and multinational companies. One recent client is planning ['to deploy a drone army'](http://www.ft.com/cms/s/0/5ea4c668-1364-11e6-91da-096d89bd2173.html#axzz4HXDwufht) worldwide, which can expand our understanding of artificial intelligence (e.g. imagine drones flying around a restaurant and delivering your food to your table after you press a single button). Check also my [user CV profile](https://stackoverflow.com/cv/username_1).
My first AI program was a chat bot, written over 18 years ago in Pascal with custom-written assembler libraries, to make my schoolmates believe they were chatting on IRC with real people while sitting at computers without any internet connection, so that others could play games over the real network on the spare computers. This worked for the first 15-30 minutes; after that they would figure out that something was wrong, or get bored. My second project involved AI bots protecting IRC channels. I have also done some AI in games. Since then I have been interested in practical applications of AI; it is a long-term hobby and interest of mine. Later projects brought more sophisticated requirements. Currently I am working on integrating AI with financial algorithms and systems.
I am a good team player, so I am able to cooperate with the other mods, and I am available on a daily basis (GMT/DST time). I hope we can improve this site by keeping it free of chaos, spam and trolls, making it a high-quality site.
Upvotes: 0 <issue_comment>username_5: [](https://ai.stackexchange.com/users/145)
[](https://ai.meta.stackexchange.com/users/145)
[](https://stackexchange.com/users/5129611)
### Notes:
I would like to offer my services as a pro-tem moderator on this site. I have been a relatively active member since I joined on Day 0. I have 135 edits (counting tag-only edits), I was the first one to earn the [Strunk and White badge](https://ai.stackexchange.com/help/badges/12/strunk-white), I am the top reviewer for both [Close Votes](https://ai.stackexchange.com/review/close/stats) and [Reopen Votes](https://ai.stackexchange.com/review/reopen/stats) on the main site, I was the first reviewer of [Late Answers](https://ai.stackexchange.com/review/late-answers/stats), and I was the first reviewer on Meta. I have watched Meta, and pitched in when I could.
I was also one of 25 users to earn the [Beta badge](https://ai.stackexchange.com/help/badges/30/beta), which means that I was an active user in the Private Beta. I now also have the [Convention badge](https://ai.stackexchange.com/help/badges/68/convention), which means that I've been active here on Meta.
I may not know so much about AI, really, but I do know enough to be able to tell if something answers the question or not, I think. :)
Also, I am one of the only users who has ventured onto [chat](http://chat.stackexchange.com/rooms/43371/the-singularity) :P
I am also active on this Meta, the Puzzling Meta, and the main Meta\*.
I am fairly well-versed in the content in the Help Center and site policy, as well.
\* Okay, I mostly flag things as off-topic. But I have asked/answered some!
**About Me**
I'm a 14 year-old kid. The only moderation experience I have is being an admin on 3 Wikias. (Not popular ones - little outdated backwater ones. :P) I live in the UTC+2/3 time zone, although I'm often on late.
I don't go to school; I'm homeschooled.
I am not a programmer.
I have been using SE for a year and 11 months, roughly, so I have a pretty good idea about how the site works :P.
Upvotes: 4
---
2016/08/17
<issue_start>username_0: This tag doesn't really seem to be of much use. [brain](https://ai.stackexchange.com/questions/tagged/brain "show questions tagged 'brain'") would seem a more appropriate tag for a site like biology.SE.<issue_comment>username_1: I think a lot of topics about AI/ANNs aim to achieve brain simulation, so maybe we can rename it to [brain-simulation](https://ai.stackexchange.com/questions/tagged/brain-simulation "show questions tagged 'brain-simulation'").
Upvotes: 1 <issue_comment>username_2: What would be the difference between brain-simulation and neuromorphic-computing tags?
Upvotes: 3 [selected_answer]
---
2016/08/17
<issue_start>username_0: [deepqa](https://ai.stackexchange.com/questions/tagged/deepqa "show questions tagged 'deepqa'") is just another name for [watson](https://ai.stackexchange.com/questions/tagged/watson "show questions tagged 'watson'"). Can we perhaps merge these tags, with [watson](https://ai.stackexchange.com/questions/tagged/watson "show questions tagged 'watson'") being the real one?<issue_comment>username_1: Yes, it's pointless to have two tags referring to the same thing. Since [deepqa](https://ai.stackexchange.com/questions/tagged/deepqa "show questions tagged 'deepqa'") had three questions and [watson](https://ai.stackexchange.com/questions/tagged/watson "show questions tagged 'watson'") had four (and all but one DeepQA question had the Watson tag already), I manually merged the tags together by removing [deepqa](https://ai.stackexchange.com/questions/tagged/deepqa "show questions tagged 'deepqa'").
One could make the argument that DeepQA is the research project while Watson is the product, but all of the questions tagged [deepqa](https://ai.stackexchange.com/questions/tagged/deepqa "show questions tagged 'deepqa'") were about Watson.
Upvotes: 4 [selected_answer]<issue_comment>username_2: *Watson* is the name of the computer, and *DeepQA* is the name of the technology and software. The two are correlated; *Watson* sounds more specific, but on the other hand there are no known computers using *DeepQA* that aren't called *Watson*.
We do not know whether there are any other computers that use *DeepQA* technology but are unrelated to *Watson*; there could be some implementation of *DeepQA* that is not called *Watson*. To simplify things, both terms can be treated as synonyms, with [watson](https://ai.stackexchange.com/questions/tagged/watson "show questions tagged 'watson'") as the main tag, since it is more popular (*Watson* has its own Wikipedia page, while *DeepQA* does not).
For more detailed information about the differences, check [@Avik's post](https://ai.meta.stackexchange.com/a/1180/8) and the following answer:
* [Are there any DeepQA-based computers other than Watson?](https://ai.stackexchange.com/q/1665/8)
Upvotes: 2 <issue_comment>username_3: I would be careful about merging the two. [deepqa](https://ai.stackexchange.com/questions/tagged/deepqa "show questions tagged 'deepqa'") is very much just that: a deep learning approach to questions and answers. This covers NLP, hypothesis formation, candidate answer generation, and answer selection from the candidates. It is fully limited to that domain.
These pages show what I'm getting at:
<https://www.research.ibm.com/deepqa/deepqa.shtml>
<http://researcher.watson.ibm.com/researcher/view_group_subpage.php?id=2159>
<http://researcher.watson.ibm.com/researcher/view_group_subpage.php?id=2162>
<http://researcher.watson.ibm.com/researcher/view_group_subpage.php?id=2160>
On the other hand, [watson](https://ai.stackexchange.com/questions/tagged/watson "show questions tagged 'watson'") is this titanic over-arching project that dips into culinary arts, healthcare, and more recently education and other topics I'm sure I'm missing. It is the foremost product of IBM's cognitive computing research and has numerous applications and uses, and elements that construct it. It goes well beyond just the QA portion (which is an integral part of Watson, but not the entirety or even nearly a synonym of Watson).
For this reason, I personally think they are certainly different topics, but being new to stack exchange I'm not sure how you would like to handle this.
Upvotes: 2
---
2016/08/21
<issue_start>username_0: If you browse through the [scope](/questions/tagged/scope "show questions tagged 'scope'") tag here on meta, you'll see that our scope might not be entirely obvious from the site title. When we open to the public, though, it's really important that we can quickly summarize our scope. Not everybody will have the patience to go through all our meta discussions before posting. Therefore, I think we should try to boil our consensuses down into a sentence or so, suitable for putting on the "sign up" banner.
For example, here's [Super User](https://superuser.com/)'s, emphasis mine:
>
> Super User is a question and answer site **for computer enthusiasts and power users**. Join them; it only takes a minute
>
>
>
[Programmers](https://softwareengineering.stackexchange.com/):
>
> Programmers Stack Exchange is a question and answer site **for professional programmers interested in conceptual questions about software development**. Join them; it only takes a minute
>
>
>
[Data Science](https://datascience.stackexchange.com/):
>
> Data Science Stack Exchange is a question and answer site **for Data science professionals, Machine Learning specialists, and those interested in learning more about the field**. Join them; it only takes a minute
>
>
>
What should we have in that spot? As [mentioned by wythagoras](https://ai.meta.stackexchange.com/a/1198/75), we do have a default already in the tour, but do we need to adjust it after our meta deliberations?
(This is the fourth [real essential meta question](https://meta.stackexchange.com/a/223675/295684) for private beta sites.)<issue_comment>username_1: Taking this from the tour and initial Area 51 description:
>
> Artificial Intelligence Stack Exchange is a question and answer site for people interested in conceptual questions about life and challenges in a world where "cognitive" functions can be mimicked in a purely digital environment.
>
>
>
Upvotes: 1 <issue_comment>username_2: I am in the process of writing up the final review of this site. In it, we discuss the difficulties this site is having with scope — mostly around the *popular fallacies* of what AI actually is. Artificial intelligence is very different from how it’s portrayed in the movies. Whenever a problem becomes solvable by a computer, people start arguing that it does not require intelligence at all… and "as soon as it works, no one calls it AI anymore" — *<NAME>*
As such, this community is having difficulty navigating that narrow gap of what I'd call "AI relevance".
The proposal that created this site was intentionally placed in the *'scientific'* category. If you accept that we are not creating another programming site, I think we have stumbled upon an interesting niche that describes the original premise of this site nicely:
>
> Artificial Intelligence Stack Exchange is a site with a social and scientific focus on "Advanced Computing in Society."
>
>
>
Think about it. With autonomous cars, smart surveillance, and "the next big thing" capturing the headlines, this isn't a terrible idea for a subject. Draping it in the popular AI label gives it a better focus… and it makes it completely clear that **this is *not* a technical implementation or programming site.** We already have that.
Upvotes: 4 [selected_answer]<issue_comment>username_3: >
> Artificial Intelligence Stack Exchange is a question and answer site **for people interested in conceptual questions about non-biological agents**. Join them; it only takes a minute
>
>
>
Upvotes: -1 <issue_comment>username_4: I think that science without mathematics is usually impossible, and science without technology is very difficult (otherwise, how would we talk about computers, for example?), but science without programming/implementations is possible.
The emphasis would then be on the concepts and/or abstractions.
So kind of:
* pseudocode is okay, real code is not
* algorithms are okay, implementations are not
* math is okay, as long as the concepts remain abstract
How to put that into a single line?
>
> Artificial Intelligence Stack Exchange is a site **for people
> interested in social, conceptual and scientific questions about Advanced
> Computing**. Join them; it only takes a minute
>
>
>
I feel this does the most to keep away from any implementations. But I also feel the limit should only be implementations, not higher-level programming, algorithms, maths or statistics.
Upvotes: 2
---
2016/08/24
<issue_start>username_0: I have the feeling this would be opinion-based or too broad, so I am coming here for community review before writing a more complete question.
The root of the question is where to put the limit between "automated system" and "artificial intelligence".
For example, could a hybrid car that is able to start a generator by itself to recharge the battery, after a period of use or at a given battery level, be called an artificial intelligence? If not, at which point could we start talking about artificial intelligence?
If this happens to be on-topic, what would be the relevant tags?<issue_comment>username_1: It appears that there is at least one question like this on the site:
[Are Siri and Cortana AI programs?](https://ai.stackexchange.com/questions/1461/are-siri-and-cortana-ai-programs)
So I guess it would be okay - as long as you are asking **about one** (or two) **specific thing**(s).
As for asking in general when something is AI, that has already been asked: [What are the minimum requirements to call something AI?](https://ai.stackexchange.com/questions/1507/what-are-the-minimum-requirements-to-call-something-ai)
The tags... Now that's the problem. That might be worth its own Meta post.
And welcome to AI!
Upvotes: 4 [selected_answer]<issue_comment>username_2: In addition to what was said by Mithrandir, I would personally say it's best that such a question focus on **only one thing**. In other words, questions that ask about an aspect of each item in a big list of things would be less than ideal. In the case of Siri and Cortana (smart personal assistants, basically), they're very similar products, so it makes sense to have one question for them.
It would be even better if such questions included **specific features** of the objects/products that the question owner suspects may produce AI. That shows research effort, and in discovering the relevant features, the person who asks might stumble upon an interesting insight themselves. It also has the benefit of covering all products that have that feature (having wide applicability yet focused scope tends to mark great questions in my experience), so we might not even need to name Siri and Cortana in the question title.
Upvotes: 2
|
2016/08/30
| 861
| 3,625
|
<issue_start>username_0: There's been some comment discussion as to whether a couple of questions e.g. [this one](https://ai.stackexchange.com/questions/1784/using-feature-learning-for-a-medical-text-classification-problem) and [this one](https://ai.stackexchange.com/questions/1783/how-to-represent-a-large-decision-tree) have been on topic.
In my opinion:
1. We should take care not to readily dismiss technical questions
as being 'programming related'.
2. It's worth asking whether (even if the question mentions a specific
technique) it could be answered with reference to open issues in AI.
For example, quite a number of questions (most of which have, in my opinion rightly, been left open without issue) are concerned with how to choose features for learning. In one respect, this is the single biggest issue facing AI: the current vogue for DL approaches is precisely because of the progress they claim in this area.
In particular: *the data science community has not solved this problem* - they are in general consumers of relatively stable research, rather than at the cutting edge, as is the case for AI.
Hence, we maybe shouldn't dismiss these things as implementation if they can usefully be treated conceptually.
Perhaps we can use "Is this a solved problem (in research terms)?" as a heuristic to help us here. There's certainly precedent for this: it is precisely the distinction between the 'Mathematics' and 'Math Overflow' SE sites.
Questions tangentially related to programming, but not actually about the coding of the AI itself, are also OK. [Your second example](https://ai.stackexchange.com/q/1783/75) asks how to represent part of an AI's state for debugging visualization. It's a pretty neat question in my opinion, landing squarely in the science part of artificial intelligence.
I would be a little wary of allowing questions about the fine details (i.e. the mathematical/statistical mechanics) of yet-to-be-solved research problems, as those are likely to be much better served at one of the math-heavy sites. Conceptual questions about what kinds of things they work on are interesting and well-suited to our site.
Executive summary: if a question has mathematical formulae or computer code as critical elements, the best home for it is *possibly* a different site. This answer contains a lot of weasel words to emphasize that it's not at all a rulebook that applies everywhere. Such an answer would be a tome.
Upvotes: 2 <issue_comment>username_2: I would say that it's a judgment call on a case-by-case basis. I don't think there's a simple rule you can implement that captures all of the nuance involved here. My feeling is that unless you can say with pretty close to **absolute certainty** that a question which includes code would get a better answer somewhere else, it's better to err on the side of leaving it alone.
That a question might contain math is, to me, nearly completely irrelevant to whether a question belongs here or not. Irrelevant in that it's orthogonal to the issue of whether something is "conceptual" or "implementation". After all, math **is** the language of science.
Upvotes: 2
|
2016/09/02
| 765
| 2,790
|
<issue_start>username_0: For example:
* [Why would an AI need to 'wipe out the human race'?](https://ai.stackexchange.com/q/1824/8)
There is already the website for [philosophical question](https://philosophy.stackexchange.com/), however here we can have more direct answers from the AI experts.
Should we allow such questions?<issue_comment>username_1: Yes.
If we send away everyone asking about philosophy; send everyone asking about feature selection for ANNs to data science and send everyone asking about AI research institutes to chat then there's really not so much left to talk about.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Yes, but I think many philosophical questions would be better off on Philosophy SE. It depends on the type of question. Questions that AI experts have mostly thought about (like ["Why would an AI need to 'wipe out the human race'?"](https://ai.stackexchange.com/q/1824/8)) are better suited here, while questions that are tangentially related to AI but are really referring to "philosophical concepts" ([robotic free will](https://philosophy.stackexchange.com/questions/37442/are-robot-rebellions-even-possible) and [AI creativity](https://philosophy.stackexchange.com/questions/11450/can-computers-be-programmed-to-be-creative/15617#15617)) are better left to the philosophy experts.
Upvotes: 2 <issue_comment>username_3: 1. The [site proposal](http://area51.stackexchange.com/proposals/93481/artificial-intelligence) on the Area51:
>
> "For conceptual questions about life and challenges in a world where
> "cognitive" functions can be mimicked in purely digital environment."
>
>
>
This very clearly includes the border with philosophy.
2. *Don't narrow the site topics.*
It only results in a mass of people leaving the site disappointed after the closure of their first questions. With them, we lose not only their content, but also the content they could have created if their first experiences had been better.
There is a so-called "common sense" about what belongs to AI: it is what ordinary people, who don't even know that a meta site exists, think AI is. In my opinion, *the topic of the site should never be narrowed significantly below this "common sense"*.
3. Pragmatic reasons.
Currently we are absolutely not in a position where we can afford the luxury of closing questions. Later it may be better, but (1) and (2) will still stand even then.
---
Note that I don't really like philosophical questions. I think AI sits more on the engineering/science border than being a philosophical thing. If the site seemed to be sinking into a mess of endless philosophical debates, I would suggest imposing a *little* limit (for example, using VTC as duplicate votes more rigorously), but this is not the case (now).
Upvotes: 2
|
2016/09/03
| 826
| 2,956
|
<issue_start>username_0: As we can see the current [stats of the site](http://area51.stackexchange.com/proposals/93481/artificial-intelligence):
[](https://i.stack.imgur.com/VvdrO.png)
Also, we've had similar proposals before, which were all in vain:
1. [Closed after 12 days in beta](http://area51.stackexchange.com/proposals/6607/artificial-intelligence)
2. [Closed after 18 days in beta](http://area51.stackexchange.com/proposals/57719/artificial-intelligence)
What should be done to maintain a healthy site?<issue_comment>username_1: Don't worry! We've passed the private beta mark, while the sites you mentioned were closed during that stage. That indicates that Stack Exchange reviewed our progress and determined that we're doing well enough to continue into *public* beta, which is [where we are now](https://ai.meta.stackexchange.com/q/1202/75).
Regarding the Area 51 stats: those goals are what you should expect from a site that's about to graduate fully. In days of old, it was expected that graduation would happen at 90 days in or else the site would indeed be closed. Now, sites can stay in beta as long as necessary. For more information, see [Graduation, site closure, and a clearer outlook on the health of SE sites](https://meta.stackexchange.com/q/257614/295684).
All that said, we should be promoting this site and growing the community. Asking quality questions and providing great answers is an excellent way to improve the site. We're collecting ideas for site promotion here: [How do we promote this site?](https://ai.meta.stackexchange.com/q/1/75)
Upvotes: 4 [selected_answer]<issue_comment>username_2: Recently we've gone through the very critical private beta stage, where 3 previous attempts over the last 6 years failed to succeed. See: [No AI in Area51](https://blog.stackoverflow.com/2010/12/no-artificial-intelligence-in-area-51/).
Since we've successfully passed the final review process, we now have more time to improve and expand our site to a healthy state before graduating to a full site (it can take months or even years to reach that stage).
If you check the [All sites statistics](http://stackexchange.com/sites#questionsperday) and compare with other sites, taking into account that we've only just entered the public beta, it's not as bad as it looks (there are more than 30 sites with fewer questions asked per day). It just takes time for new people to join and start using the site; not everybody knows about it yet.
As [@Robert](https://ai.meta.stackexchange.com/a/1199/8) mentioned a few weeks ago:
>
> we stumbled upon an interesting niche that describes the original premise of this site
>
>
>
Currently we are in the stage of clarifying the scope, as per: [How can we quickly describe our site?](https://ai.meta.stackexchange.com/q/1197/8)
Instead of worrying about it, we should ask ourselves: [How do we promote this site?](https://ai.meta.stackexchange.com/q/1/8)
Upvotes: 2
|
2016/09/03
| 867
| 3,723
|
<issue_start>username_0: I have been thinking about the *"shelf life"* of the questions & answers here, and have the following observations:
**1.** Artificial Intelligence is a rapidly changing, very active research area. I think there are questions left open... I mean without a *current* answer, to put it coarsely. I can imagine that some answers will turn out to be out of date or will be outperformed many times over. It is possible that in one month or one year we get a very different answer, because **(a)** some people are actively researching and have discovered something amazing, or **(b)** new users come to the site (who knew of a better answer).
**2.** AI.SX is definitely different from other sites on Stack Exchange, because the questions are not *quickies*. It is not like *I need to solve this urgent issue now, how do I do it?*. Many questions have different answers, which often complement each other. Also, based on the comments on an answer (or question), it can be edited to include new points and be improved.
**3.** The former point is much more noticeable, since this site is about *science* and not *technology*. The topic about specific algorithms or techniques has been discussed here on several meta questions. A side-effect is that the questions tend to be (in my opinion) broader. I personally think that is OK, and wish for a certain discussion rather than **the** answer.
Seeing all that, I think that many questions could be left open for... Well, like forever. Because many are *active* questions, which cannot be *solved* like in other sites of the network:
**Question → Answer → `hasaccepted:yes`**
Perhaps that could lead to more answers in community wiki, to which one comes (next month) after reading some new things or hearing another conference?
Or we just get new answers to questions with an accepted answer and switch (the checkmark) if the new is better?
What do you think will happen?<issue_comment>username_1: >
> Or we just get new answers to questions with an accepted answer and switch (the checkmark) if the new is better?
>
>
>
**Yes.** That's *exactly* what we'll do. It makes no sense to do it any other way.
If we have community-wiki answers to everything, then that complicates the reputation system; also, not enough people will reach new privilege levels.
As the field grows and changes, so too the site - we'll have new questions, and new answers to old questions.
Upvotes: 2 <issue_comment>username_2: This is something to consider even on very technical sites like Stack Overflow. New developments (e.g. new language features) allow new and better solutions to problems. That's yet another reason why questions should allow new answers even after one is accepted. The accept mark indicates that the answer is the best for the question poster at the time. In some cases, the question poster vanishes, never to be seen again on the site. Fortunately, we have something else to measure answer usefulness:
**Votes.** Posts accept votes forever (in most cases), and new answers (among other events) push the question back onto the site front page so it can be examined anew. Community members should definitely read new posts and vote on their quality. In an ideal world, better answers would always overtake old decent answers in score, but that doesn't always happen. If you see a really awesome answer going unnoticed, you might consider [placing a bounty](https://ai.stackexchange.com/help/bounty)!
As Mithrandir mentioned, community wiki isn't ideal for this scenario, since it has the undesired effect of disabling reputation changes. Newer users should add new takes on the issue via new answers (or possibly comments, if the changes are tiny).
Upvotes: 2
|
2016/09/18
| 448
| 1,796
|
<issue_start>username_0: It seems to me that the visit stats have tailed off quite dramatically over the last week or so: upwards of 400 down to 200 or so.
The number of new questions also seems to have diminished, so maybe now is a good time for us all to start asking new ones with the kind of enthusiasm that [kenorb](https://ai.meta.stackexchange.com/users/8/kenorb) brought to the party when the site launched?
For my part, I'm travelling today and tomorrow, but will attempt to come up with something meaningful thereafter.<issue_comment>username_1: More questions certainly would be great, but a low-activity period (the length of which varies from site to site) after the public beta start is normal. For more information, see [What is the typical growth pattern of a new beta site in the first few weeks?](https://meta.stackexchange.com/q/227007/295684)
If I perceive correctly, we did get something of an extra boost from "can a paradox kill an AI?" being in Hot Network Questions for a few days. It would be great if we could produce more content that's both high-quality and interesting to a lot of people.
So yes, if anyone has additional well-thought-out questions in mind, we would be happy to have them!
Upvotes: 4 [selected_answer]<issue_comment>username_2: There's definitely been a fall-off, but as others have said, it will take time for people to find the site and become engaged. I feel like one of the most important things to do in the meantime is to keep asking *some* quality questions, and/or get additional answers to existing questions, such that first-time visitors won't perceive the site as dormant. From a network science POV, we want a "preferential attachment" sort of scenario, where new nodes attach themselves to this node and grow our network.
Upvotes: 1
|
2016/09/22
| 995
| 3,608
|
<issue_start>username_0: What should we have in *[Help Center > Asking](https://ai.stackexchange.com/help/asking)* section regarding [What topics can I ask about here?](https://ai.stackexchange.com/help/on-topic)
For example [Stats SE](https://stats.stackexchange.com/help/on-topic) has this:
>
> CrossValidated is for statisticians, data miners, and anyone else
> doing data analysis or interested in it as a discipline. If you have a
> question about
>
>
> * statistical analysis, applied or theoretical
> * designing experiments
> * collecting data
> * data mining
> * machine learning
> * visualizing data
> * probability theory
> * mathematical statistics
> * statistical and data-driven computing
>
>
>
And here is /help/on-topic at [Data Science](https://datascience.stackexchange.com/help/on-topic):
>
> Examples of questions that are likely to be on-topic for Data Science
> Stack Exchange:
>
>
> * Given process monitoring data arriving every 10ms, what statistical tool should I use to best characterize a change in the process - mean?
> a distribution?
> * When is it suitable to apply L1 regularization for feature selection?
> * I would like to produce a infographic on the 'Brexit' referendum. Given public opinion data across the UK, what are some meaningful
> techniques to visualize it in a dashboard?
> * When executing an ARIMA model in Spark, what are the pros and cons of using Python instead of R?
> * Given Facebook Likes, is there an ML technique to predict age and gender?
>
>
>
If we want to differentiate ourselves from the above sites, we should have our own unique section about the topics people can ask about here.
What description of [/help/on-topic page for AI site](https://ai.stackexchange.com/help/on-topic) would you suggest?<issue_comment>username_1: Drawing on these existing discussions:
* [How can we quickly describe our site?](https://ai.meta.stackexchange.com/q/1197/75)
* [Should philosophical questions related to AI be on-topic?](https://ai.meta.stackexchange.com/q/1221/75)
* [A friendly reminder that this site comes from the Science category](https://ai.meta.stackexchange.com/q/1141/75)
* [How this site is different from Cross Validated?](https://ai.meta.stackexchange.com/q/1123/75)
Also taking some inspiration from [the Super User "on topic" page](https://superuser.com/help/on-topic), here's my first stab at it:
>
> If you have a question about...
>
>
> * social issues in a world where artificial intelligence is common,
> * conceptual aspects of AI, or
> * human factors in AI development
>
>
> ...and it is *not* about...
>
>
> * the [implementation](https://ai.meta.stackexchange.com/q/1215/75) of machine learning, or
> * asking for a development tool or career path recommendation
>
>
> ...then you're in the right place to ask your question!
>
>
>
This is only a draft, but it seems like a good starting point. Please suggest improvements if you see anything that needs adjustment! Specifically, I'm not sure how specific we need to be about what constitutes "implementation" in this blurb. If there are other commonly asked kinds of off-topic questions, those could be worth mentioning too.
Upvotes: 3 [selected_answer]<issue_comment>username_2: We should drop any reference to implementation specifically being on or off topic. That's really orthogonal to the issue and it makes it too easy for people to justify arbitrarily closing good questions. And as this eliminates so many of the more concrete questions, it makes the site appear as though it's only for science-fiction'ish questions.
Upvotes: 3
|
2016/10/01
| 850
| 3,527
|
<issue_start>username_0: Many questions on AI seem to be trying to predict what might be possible in the future. This lends itself to science-fiction speculation (opinions). I think I am mostly provoked by this question:
[What jobs cannot be automatized by AI in the future?](https://ai.stackexchange.com/questions/2048/what-jobs-cannot-be-automatized-by-ai-in-the-future), which essentially wants us to make a prediction about a future scenario (specifically, what AI *can't* do). Predicting the future is hard, especially if there's no cut-off point (predicting which jobs are killed by AI in the year 2020 is much easier than predicting which jobs are killed by AI in 2100)... and it's not quite clear whether there will be much expert opinion on futuristic predictions, or even whether experts are able to make good predictions about the future at all.
Questions about the future would only solicit personal opinions. I would strongly suggest that these types of questions be closed as opinion-based.<issue_comment>username_1: Such questions usually tend to draw a lot of low-quality answers which speculate without giving any backup to their claims. In the end, it's just one person's opinion on the topic.
Therefore, if a question isn't going to generate any constructive answers - it has no reliable references, there are no existing research studies in the area (because the topic isn't a good fit, or is too localized), and it's just asking people to speculate based on gut instinct - we should vote to close.
Although this particular question about [automating human jobs](https://ai.stackexchange.com/q/2048/8) isn't actually bad, since it's possible to assess such probabilities based on the available employment data; in the [2013 Oxford study](http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf), they managed to estimate this using computer models. So I believe it's actually answerable.
Upvotes: 1 <issue_comment>username_2: Some questions about the future will fall squarely under the scope of AI. AI seems to be a sponge that attaches its salience to everything, from depression to eschatology. It's hard for us to say declaratively what parts of life AI will or won't impact.
But I agree that some questions have been inadequately specified. If a given question is so obtuse that we think we will lack sufficient evidence to determine an answer within a decade or two... would some criterion like that be sufficient reason to close the question?
On the other hand, sometimes it may be better to actually explain to a questioner why a particular question is apparently naive, since other people out there may be suffering the same prejudices or misconceptions.
Upvotes: 0 <issue_comment>username_3: We already close most of the more concrete questions, with some bullshit verbiage about how they're too "implementation" based. This only leaves room for the science-fiction style questions. If we start closing the science-fiction questions, there won't be anything left to do. Might as well close the site.
What we need to do is go back to what I suggested before - close the ***blatantly*** off-topic questions (eg, "How do I rebuild the carburetor on my 1973 Ford Pinto?") and obvious spam, and rely on the upvote/downvote mechanism for the grey-area stuff, and let the site evolve into what the users want it to become. The top-down, command-and-control model already isn't working and no amount of doubling-down on that is going to make it a good idea.
Upvotes: 2
|
2016/12/12
| 683
| 2,504
|
<issue_start>username_0: We seem to have a lot of questions about programming showing up now, which are off-topic (and not enough people VTCing!).
Examples: [(1)](https://ai.stackexchange.com/q/2457/145) [(2)](https://ai.stackexchange.com/q/2462/145) [(3)](https://ai.stackexchange.com/q/2451/145)
Is it possible to place a banner at the top of the page, stating that these questions are off-topic, such as the one on [Mi Yodeya](http://judaism.stackexchange.com)? Or is that only available for graduated sites?<issue_comment>username_1: Yes, I agree that this is a concerning trend. Though we [have an on-topic page](https://ai.meta.stackexchange.com/q/1252/75) that categorizes such questions as off-topic, there is not a direct link to that help center article on the asking form. [Relevant MSE.](https://meta.stackexchange.com/q/213935/295684)
Though we put together an on-topic page, the [tour page](https://ai.stackexchange.com/tour) was neglected. People are encouraged to take the tour when they first sign up; for some, it might be the only topicality-related document they read. Just now, I changed the "ask" and "don't ask" bulleted lists away from the default generic stuff to something that summarizes our help center guidelines. **Suggestions for improvements are welcome!** Hopefully this change will help our problem; if it doesn't, we can consider more conspicuous help text.
In regard to the examples you brought up (thank you for bringing specifics!):
1. This question was voluntarily removed by its author after receiving some comments about topicality.
2. This seems interesting to me; I think one could argue that it's asking about ways of thinking as opposed to asking for some code.
3. This is indeed a question about programming. It is [in the Close Votes queue](https://ai.stackexchange.com/review/close/1135) at the moment pending review. As you said, it would be very good to have more people reviewing. There are currently 16 non-moderator users with [the close/reopen vote privilege](https://ai.stackexchange.com/help/privileges/close-questions); I encourage all such users to [have a look at that queue](https://ai.stackexchange.com/review/close/).
Upvotes: 3 <issue_comment>username_2: There's nothing concerning about it. It's just the community speaking in regards to what they want to talk about. Let's quit trying to fight a rising tide and accept that AI is an inherently technical topic, and enthusiasts are going to want to ask technical questions.
Upvotes: 1
|
2016/12/13
| 471
| 1,687
|
<issue_start>username_0: I am trying to post a question in [the Ask Question form](https://ai.stackexchange.com/questions/ask), but it always shows "You can only post once every 40 minutes" even though it's my first question.
My question is **What are the artificial intelligence frameworks?**
May I ask this question?<issue_comment>username_1: This applies site-wide.
If you have asked a question *anywhere on the Stack Exchange network* in the past 40 minutes, you have to wait before asking a question on *any site*.
See this answer: <https://meta.stackoverflow.com/questions/322157/arent-new-users-throttled-asking-questions-anymore/322265#322265>
Upvotes: 2 <issue_comment>username_2: As mentioned by Mithrandir, this is a network-wide measure that applies to all users with less than 125 reputation. Source: [The Complete Rate-Limiting Guide](https://meta.stackexchange.com/a/164900/295684). It's designed to slow down spammers. Once the 40-minute window elapses, you'll be able to post another question anywhere on the network. I see that you have [already done so](https://ai.stackexchange.com/q/2471/75).
Please note that resource recommendations are off-topic here for two reasons. First, this site is for social and conceptual questions about artificial intelligence. Also (and this applies to most sites on Stack Exchange), collections of off-site resources tend to go out of date very quickly; it takes [a community effort](https://ai.meta.stackexchange.com/q/1267/75) to keep such a resource up to date. If the [scope of the site](https://ai.stackexchange.com/help/on-topic) is unclear, please bring up your concern here on meta so we can get it clarified.
Upvotes: 2
|
2017/01/06
| 1,287
| 5,123
|
<issue_start>username_0: Technical/mathematical/implementation questions are [off-topic](https://ai.meta.stackexchange.com/a/1199/4). However, many of them are not getting closed. For instance, here are the most recent close votes I cast on the grounds that the questions were technical, but none of them got closed (see the screenshot below).
Update 2017-01-19: the two answers written so far point out that technical questions may be on-topic in some cases. The issue I intended to raise in this question is that off-topic technical questions are not getting closed. E.g. in the screenshot below the vast majority of the technical questions are off-topic.
[](https://i.stack.imgur.com/YEQvk.png)<issue_comment>username_1: Good. There has never been any actual consensus that all "technical" questions are off-topic. And at the end of the day, the community decides what is on-topic, not a bunch of ivory-tower navel-gazers here on meta. Personally I like where we're at with this. There are some technical questions, yes, but quite often they're *different* technical questions than the ones you see on stats or datascience or whatever. That tells me we're providing real value to the world, and that makes me happy.
If anything, I say the only action we might need to ramp up is migrating some questions to other \*.se sites, if they are clearly more suited to a different site (say, stats.se or datascience.se). I'm not entirely sure how migration works, though... can anybody nominate a question to be migrated, or what? Does that come in at a certain karma level, or is that something that only the Stack Exchange employees can do?
Upvotes: 2 <issue_comment>username_2: I don't think it's possible to force people not to ask technical questions. Once a question is asked, the community decides whether it's on-topic or not. Closing it only because it's a technical question isn't enough; more things need to be taken into account before deciding.
To be clear, this [proposal comes from the Science category](https://ai.meta.stackexchange.com/q/1141/8), so scientific questions are clearly on-topic (especially from a [socio-scientific angle](https://ai.meta.stackexchange.com/a/1144/8)), but some overlap in scope is expected.
Please note that there are over 10 sites across the Stack Exchange network where Artificial Intelligence-related questions can also be on-topic (such as [Cross Validated](https://stats.stackexchange.com/questions/tagged/artificial-intelligence), [Data Science](https://datascience.stackexchange.com/questions/tagged/machine-learning), [Computer Science](https://cs.stackexchange.com/questions/tagged/artificial-intelligence), [CSTheory](https://cstheory.stackexchange.com/questions/tagged/ai.artificial-intel), [Cognitive Sciences](https://cogsci.stackexchange.com/questions/tagged/artificial-intelligence), [Philosophy](https://philosophy.stackexchange.com/questions/tagged/artificial-intelligence), [Worldbuilding](https://worldbuilding.stackexchange.com/questions/tagged/artificial-intelligence), [Stack Overflow](https://stackoverflow.com/questions/tagged/artificial-intelligence), [History of Science](https://hsm.stackexchange.com/questions/tagged/artificial-intelligence), [Robotics](https://robotics.stackexchange.com/questions/tagged/artificial-intelligence), [GameDev](https://gamedev.stackexchange.com/questions/tagged/ai) and so on), so once a question is asked, it's a matter of speculation where exactly it should belong, unless it's very clear. Claiming ownership, on behalf of some other non-AI site, of an AI-related question that has been asked specifically here, only because it's a technical one, would be unwise. The point is that this site is fully dedicated to AI, while the *Cross Validated* site has only a few tags related to [AI](https://stats.stackexchange.com/questions/tagged/artificial-intelligence) and [machine learning](https://stats.stackexchange.com/questions/tagged/machine-learning), and it focuses only on statistical techniques, where the questions asked don't have to be related to AI.
Therefore, if a question is asking about statistical techniques, then sure, it's more on-topic at [Cross Validated](https://stats.stackexchange.com/). Especially if you think it's off-topic here (e.g. nothing to do with AI) and on-topic there, vote to close, so that after closure it can be migrated by the moderators to the other site. The same goes for questions specifically about [data science](https://datascience.stackexchange.com/) or [programming](https://stackoverflow.com/questions/tagged/artificial-intelligence).
In summary, the level of technicality is a matter of speculation. For me, as long as a question doesn't consist of math, requests for formulas, technical implementation or modelling, or programming code, it's not a technical question. We should rather ask ourselves whether the question is off-topic here (not about AI) and on-topic somewhere else.
Related discussion: [What should be on-topic, modelling or implementation, or anything else?](https://ai.meta.stackexchange.com/a/1235/8)
Upvotes: 2
|
2017/01/26
| 537
| 2,270
|
<issue_start>username_0: Since [this site is more subjective than some](https://ai.meta.stackexchange.com/q/1283/75), we occasionally get answers that are solely based on personal opinion or make claims with no justification/references. Even subjective questions should invite facts (instead of opinions), so such answers are less than ideal.
What should we do with these answers? Here are a few options (though feel free to propose alternatives not in this list):
* Just leave them alone and let them be downvoted/ignored
* Flag them for immediate deletion
* Flag them for the application of a post notice (e.g. "citation needed"), with deletion being the next course of action if the answer is not filled out
If you'd like some ideas on how this is handled on other sites, I refer you to [a Skeptics FAQ](https://skeptics.meta.stackexchange.com/q/1054).<issue_comment>username_1: Here's my proposal. I've tried to have this take into account our quality needs and also the good of the answer OP. This is essentially the same as your last idea.
1. Comment and ask for them to provide sources to back up their claims, or stick that moderator notice on.
This tells them that there's something wrong with how they're doing their answers, and gives them an opportunity to improve them.
2. If they update with the sources, then great - problem solved. If they refuse, or haven't after a period of time, then delete them - they're not reliable or good answers.
As for how much time we should give them, I don't know at the moment - people can provide suggestions.
Upvotes: 4 [selected_answer]<issue_comment>username_2: I have some information that is not publicly available, based on research that I've been doing for the past several years. I can't cite it, since it isn't published. Yet it is considerably more advanced, and in better agreement with observed evidence, than the theories that usually get mentioned, like Integrated Information Theory or Global Workspace (both of which can be disproved). It won't be published until it is completed, and no earlier than 2021. So, I can either withhold what I know (which would be quite odd considering that proton decay was talked about for years before it was disproved), or I can answer without citations.
Upvotes: 0
|
2017/07/25
| 396
| 1,450
|
<issue_start>username_0: I'm Pops, a Community Manager at Stack Exchange. Though it saddens me to say it, one of your moderators has decided it's time to step down. Fortunately for you, one of your fellow AI Stackers has answered the call to be your new pro tem mod:
[](https://ai.stackexchange.com/users/1671)
Please join me in thanking NietzscheanAI for their service and in welcoming DukeZhou!<issue_comment>username_1: I take on this responsibility with the assumption that I probably wasn't the first choice, and the awareness that I certainly can't fill NietzscheanAI's shoes.
That said, I'll do my best to fulfill my duties as a *pro tem* mod *(emphasis on pro tem;)*, taking my lead from the senior mods and our power-user experts, and will try to add value to the forum per my experience on the Humanities side of the AI equation.
Upvotes: 3 <issue_comment>username_2: Goodbye, @Niet! I voted for you on the original pro tem nominations, and I was sorry to hear that you were unhappy with what was considered on topic and decided to step down - I hope you decide to still be generally active, even without the diamond.
To @username_1: I've seen you around, here and over on Literature. You weren't active in the private beta, but that's excusable ;). I'm sure that you'll be able to take on your new duties and do them well. Thank you for volunteering for the position!
Upvotes: 2
|
2017/07/29
| 1,420
| 5,990
|
<issue_start>username_0: This is in relation to comments on: [Differentiable activation function](https://ai.stackexchange.com/q/2526/1671)
I made the point that the question seems to fit into the "conceptual aspects of AI" covered by this stack, but T.C. countered that Machine Learning questions, in particular, are already quite fractured across several sites.
How can we reconcile this so that the related Stacks support and add value to each other?
I personally would welcome guidance from trusted contributors and mods on the related Stacks.
---
As an analogy, there is a relationship between the Humanities Stacks Mythology, Literature, Latin and Philosophy (in addition to others such as History). Different aspects of a single topic are best addressed in the forums where contributors have the relevant strengths. My point is that these are subjects where a fuller understanding *requires* many fields.
I see this as one of the main strengths of Stack in an information explosion era with so many fields and subfields. Specifically that we can, and should, be walking "across the hall" to take advantage of the breadth of competencies Stack offers.
Part of my inclination may derive from having been in an interdisciplinary studies program as an undergraduate. In that program, we did not learn Science independently of History, Philosophy, Psychology, Art and Literature. Rather, these subjects were taught in tandem.<issue_comment>username_1: >
> I made the point that the question seems to fit into the "conceptual aspects of AI" covered by this stack, but T.C. countered that Machine Learning questions, in particular, are already quite fractured across several sites.
>
>
>
I believe most ML questions are on CV. Then DS got created, which has a huge overlap with CV, and a more trendy name. So one way to avoid fracture is not creating new Stacks with huge overlaps ([Are all questions asked on stats and data science SE also on topic here?](https://ai.meta.stackexchange.com/q/4/4)).
>
> How can we reconcile this so that the related Stacks support and add value to each other?
>
>
>
[Build and strengthen the Stack Exchange community with "crossover questions" between sites](https://meta.stackexchange.com/q/199989/178179)
>
> Part of my inclination may derive from having been in an interdisciplinary studies program as an undergraduate. In that program, we did not learn Science independently of History, Philosophy, Psychology, Art and Literature. Rather, these subjects were taught in tandem.
>
>
>
In practice, the development of AI models doesn't care much about History, Philosophy, Art and Literature. Most AI experts focus on the models, which tend to be statistical, therefore on-topic on CV.
Upvotes: 2 <issue_comment>username_2: The condition is related to the beta process definition and incentives built into the back end rules and user interface rules. These are created and maintained based on the analysis of trends and the projections of that analysis by the owners of the system upon which the domains stackexchange.com and stackoverflow.com sit.
Members, especially moderators and even more so diamond moderators, can mitigate the inevitable chaos that forms in any large account based network by choosing names and definitions that are likely to disambiguate options that users have. People can also request features and enhancements that may modify incentives in positive ways.
To meaningfully do any of these things it is important to understand that knowledge is segmented in some ways and homogeneous in other ways. Forcing questions into clean compartments is not even done at universities with curricula. In fact, trends toward interdisciplinary work are usually found at the most progressive universities and offered to the highest performing students.
The natural overlap of human discovery and achievement cannot be changed by any web site incentives system. Even the extreme measures of totalitarianism, jihad, or martial law are unlikely to bring about compartmentalized knowledge, mostly because smart people won't put up with it and will literally shoot back if pushed too far.
Artificial intelligence was born of interdisciplinary thinking, and is bound only by two things.
* It concerns primarily what can be artificially created
* It concerns primarily how to make choices that produce better results than arbitrary selection
Some may argue this. I won't because I've heard all the arguments otherwise, and they lack merit to the degree that further response is ... .
Regarding the current machine learning trend, it is primarily a social and economic phenomenon that may or may not sustain itself. Recognizing that what goes up often comes down is another key to making choices today that we don't regret later.
At one time, stone work was a technology that bled into every topic. In 100 years, one may not be able to find the phrase, "Machine learning," in a recent piece of media. Perhaps nanotech-genetic portals might have become the craze, where people are id-based swallowed by their cars and homes instead of unlocking doors and keying alarm codes. Or not.
It could go the other way where people write ML algorithms that write poems instead of writing poems. People might go to art museums and plug their mind into the Salvador Dali machine and their friends might laugh at their Dali-ized creative thoughts seen on a 4 dimensional canvas.
In today's SE/SO reality, the best we can do is to consider naming and defining sites and tags based on a balance between currently common use of terms, the literal meaning of the words that comprise the term, and the overarching pattern of academics, publication, and terminology in those two places and on the web.
My gut feel is that excessive control will do the exact opposite of balance and push everyone with a brain away from the entire SE/SO engagement model and other sites with more incentive and less control will capture those emigrants.
Upvotes: 1
|
2017/08/09
| 1,371
| 5,385
|
<issue_start>username_0: This suggestion came from a comment on [another thread](https://ai.meta.stackexchange.com/q/1291), but I thought it was worthy of its own meta question, so people can vote on and discuss it.
Here is the full comment:
>
> "**I came across the site, and expected to be able to ask questions about theory of the AIXI agent (for example), and was very disappointed to find that it was mostly focused on social issues. At the very least, it seems like the history of AI theory should be on-topic, and all of that is very technical.** There's kind of a chicken-and-egg problem here--the site can't be properly defined until it attracts enough experts, and it won't attract experts until there are interesting questions."
>
>
>
I left the second part of the comment in to illustrate how this connects to what might be seen as our #1 imperative: to attract experienced experts as contributors.<issue_comment>username_1: I definitely understand the concerns about the overlap between CrossValidated, Data Science, and this site. What we need to do, to help the site get more traction, is to define that boundary in a useful way. At a high level, it wouldn't make sense to reject a site about statistics because a perfectly good mathematics site already existed. Statistics has different goals, conventions, notation, and concerns--even though it's almost all mathematics.
I'd argue that the failures of the previous sites were more a question of timing than content. Serious interest in AI is on the horizon again, very recently, precisely because of advances in ML. That doesn't mean, however, that AI proper is the same thing as ML, or needs to be focused on implementation issues. There's a large amount of theory that isn't necessarily data science, either.
We went through some of the same growing pains on Signal Processing. The approach we took there (and I'm not saying it's the right approach for AI), was to concentrate mostly on theory, and avoid implementation details. It's something that didn't exist, and it gave us a way to attract experts who weren't programmers.
Explicitly making the history of AI on-topic, however technical, might be a good starting point to help clarify what a site dedicated to AI can add to the SE network. I'm not saying that it's necessarily off-topic now, but given that MathJax isn't even enabled yet, there's currently a strong bias toward strictly non-technical questions.
I think the [AIXI agent](https://en.wikipedia.org/wiki/AIXI) is a good example to begin discussing these issues. It's heavily mathematical, based on reinforcement learning, inspired by statistical reasoning (à la [Solomonoff's Universal Prior](https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference)), and uses non-computable concepts (i.e. [Kolmogorov Complexity](https://en.wikipedia.org/wiki/Kolmogorov_complexity)). So, there's a potential overlap with any number of fields, but really it's proper [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence). It's a much more practical definition of intelligence than, say, the Turing Test--precisely because it's defined mathematically. At the very least, it seems like definitions of intelligence should be on-topic, and we need math for those.
It might warrant a completely separate meta question, but I'll offer one thought on how to help clarify the scope of the site (in addition to including AI history). Let's start with [Peter Norvig's](https://en.wikipedia.org/wiki/Peter_Norvig) definition of AI (from [<NAME>'s](https://ai.meta.stackexchange.com/users/4/franck-dernoncourt) [slides](http://www.francky.me/doc/20120530%20-%20AI%20and%20Business%20-%20CCSF%20Paris.pdf)):
>
> We think of AI as understanding the world and deciding how to make
> good decisions. Dealing with uncertainty but still being able to
> make good decisions is what separates AI from the rest of
> computer science.
>
>
>
Any discussion of decision making under uncertainty will almost necessarily involve probability and statistics. However, the challenges involved in *automating* those decisions effectively, in my opinion, are the domain of Artificial Intelligence, whether general or specialized. That definition also includes all of the potential social issues.
Upvotes: 1 <issue_comment>username_2: I was under the impression that history and theory were already on-topic. Social issues is one new topic we bring to the SE table, but academic questions (about AI as a discipline/science) are also ours to present. Key quote from a community manager [in the Area 51 Discussion Zone](https://area51.meta.stackexchange.com/a/24016/136466), emphasis original:
>
> Notice that this proposal is in the 'Science' category; *not* 'Technology'. Despite the creation of a Data Science site to cover this topic, the community made a sufficiently compelling case that there is a swath of **questions in the academic humanities arena** that are not covered by our current sites.
>
>
>
I realize now that when [drafting](https://ai.meta.stackexchange.com/q/1252/75) the [on-topic page](https://ai.stackexchange.com/help/on-topic) I forgot to include a bullet point to cover these questions. I apologize for the oversight and have corrected it. As always, suggestions for improvement to that page's contents are welcome!
Upvotes: 3 [selected_answer]
|
2017/08/24
| 1,290
| 5,049
|
<issue_start>username_0: We currently have both [cnn](https://ai.stackexchange.com/questions/tagged/cnn "show questions tagged 'cnn'") and [convolutional-neural-networks](https://ai.stackexchange.com/questions/tagged/convolutional-neural-networks "show questions tagged 'convolutional-neural-networks'").
Should [cnn](https://ai.stackexchange.com/questions/tagged/cnn "show questions tagged 'cnn'") be a synonym of [convolutional-neural-networks](https://ai.stackexchange.com/questions/tagged/convolutional-neural-networks "show questions tagged 'convolutional-neural-networks'")?
|
2017/10/01
| 1,120
| 4,382
|
<issue_start>username_0: I've been active on several different SE sites during the last years and I haven't seen any other community that's so fast with downvoting questions, especially without providing helpful comments.
I am all for strict rules and enforcing high quality content. But with a small site like AI that still [needs to polish its numbers](https://area51.stackexchange.com/proposals/93481/artificial-intelligence) after over 400 days in beta we should be careful not to go over the top.
In case a question is definitely off-topic or not salvageable quality-wise, it needs to be treated accordingly and should be closed. But when I come here I am often greeted by several new questions with just a few views but already their first downvotes. No explanations are given, and the (often new) visitors are left with a bad feeling and no idea what they did wrong. When I go over their questions it is sometimes difficult for me to understand why they have been downvoted. I don't feel like that's the right approach to grow the site and attract new members.
Am I on the right track or do you disagree? Is a strict (and maybe a little hostile) environment necessary to keep the quality high, at the cost of losing potential members who might create valuable content if we give them feedback and time to get accustomed to our community?<issue_comment>username_1: It is indeed a shame when a user comes in, asks a decent question, and gets silently downvoted. Even though the downvote mechanism itself isn't hostile - we vote on content, not people - people will feel frustrated when their posts receive negative feedback for reasons unclear. At the same time, downvotes are critical to quality control and we cannot control users' voting behavior (except in abusive situations like targeted voting).
Fortunately, even though we might not get an explanation from downvoters themselves, we can help new users understand what's going on. The [First Posts](https://ai.stackexchange.com/review/first-posts) review queue gives you the chance to provide users' first experience on our site. You can also monitor [a list of new downvoted questions](https://ai.stackexchange.com/search?tab=newest&q=score%3a..-1%20closed%3a0) to check that the downvotes are justified and take all appropriate actions. Specifically, it's very helpful to edit and comment with a welcome and an explanation of how your adjustment will help their post's reception.
Side note for what it's worth: the Area 51 statistics are no longer as critical as their central position advertises them. The Area 51 system [is pretty old and pending a reworking](https://meta.stackexchange.com/a/263506/295684), even though it still gets the job done. The comments on [this MSE answer](https://meta.stackexchange.com/a/257720/295684) are relevant, especially [this one](https://meta.stackexchange.com/questions/257614/graduation-site-closure-and-a-clearer-outlook-on-the-health-of-se-sites/257639#comment840478_257720) (excerpt: "The A51 metrics are spectacularly ill-suited for giving an accurate picture of a site's overall health") and [this other one](https://meta.stackexchange.com/questions/257614/graduation-site-closure-and-a-clearer-outlook-on-the-health-of-se-sites/257639#comment841019_257720). It would still be nice to have higher stats, though.
In summary, quality control and welcomingness needn't be mutually exclusive. If we guide users and help adjust their posts, we can be inviting and high-quality at the same time!
Upvotes: 4 [selected_answer]<issue_comment>username_2: This could easily be solved by requiring a comment for downvotes on new stacks or on new user questions.
Upvotes: 1 <issue_comment>username_3: Well, I just got downvoted because I tried to make sense of a question that was somewhat unclear. And the person doing it left a comment saying that an answer to a bad question was still a wrong answer. This is exactly the kind of attitude that makes people leave this site.
There are a lot of questions from people who haven't got a clue about AI, and often express themselves not very clearly, as English is obviously not their first language. I am really taken aback by how unfriendly the community on here is, as most of these questions immediately get downvoted.
I don't know what the solution is, as even requiring a comment is not really solving this issue.
Upvotes: 1
|
2018/02/01
| 1,309
| 5,636
|
<issue_start>username_0: As I understand the scope of these subjects, artificial intelligence in a very strict sense should only contain questions pertaining to how we can create truly intelligent (creative, aware, etc.) machines, whereas data science is directly the manipulation of data in order to produce tools that make something better (image detection, intrusion detection, etc.). The separation is very blurry, and I do not think that intelligence can exist without information/data and its manipulation. However, most machine learning and deep learning systems demonstrated so far are simply performing complex function approximation.
However, due to the popularization of machine learning and especially deep learning, the manipulation of data has created the impression of intelligent machines, since they are capable of competing with human performance in very targeted tasks (object recognition, segmentation, etc.). Evidently, the name artificial intelligence catches people's attention much more than data science or machine learning. Thus, it is common for news that is essentially about generic data science/machine learning to be called artificial intelligence.
This is further demonstrated on the Artificial Intelligence site, where the majority of the questions pertain more to data science and machine learning than to truly discussing possibilities, methods or emerging work pertaining to machines capable of intelligence.
This niche can easily be encapsulated into a site that combines both Artificial Intelligence and Data Science.
Post on Data Science is found [here](https://datascience.meta.stackexchange.com/questions/2352/should-this-site-be-combined-with-the-artificial-intelligence-stack-exchange/).<issue_comment>username_1: Though we get a lot of (off-topic) questions about data manipulation and implementation issues in general, this site was created to serve questions that aren't so quantitative. For some more info on our scope, see the [help center](https://ai.stackexchange.com/help/on-topic). Admittedly, we are currently doing an incomplete job of making the scope clear to new users and handling off-topic questions. Nevertheless, it is clear from [this Area 51 Discussion Zone post](https://area51.meta.stackexchange.com/a/24016/136466) by a Stack Exchange community manager that this site is for AI as a science, not as a technology to be implemented:
>
> Notice that this proposal is in the 'Science' category; *not* 'Technology'. Despite the creation of a Data Science site to cover this topic, the community made a sufficiently compelling case that there is a swath of **questions in the academic humanities arena** that are *not* covered by our current sites.
>
>
>
Social, conceptual, and philosophical aspects of AI are on-topic here, but not on Data Science, which is a more technical site. There is some overlap in architectural questions, but there is much precedent for sites' topics not being fully disjoint — Stack Overflow and Super User on PowerShell questions, for example. Combining our slightly subjective questions with Data Science's technicality would be mixing two different types of questions (and two different communities).
In short, this site and Data Science are looking at different aspects of artificial intelligence. Both sites are valuable, each with its own knowledgeable people, and it would be good to preserve the distinction.
Relevant MSE: [Can Stack Exchange follow a more generic approach?](https://meta.stackexchange.com/q/68214/295684)
Upvotes: 2 <issue_comment>username_2: My first opinion was "nooo!". However, setting aside points such as whether this site must include applied AI or only strong AI, what is AI and what is data processing, ... my opinion is now "yes, just find a good name for the combined site".
The reason: this site has a low volume of questions and answers (if all off-topic content were directly closed, activity would be epsilon), a low number of views per day, and, sorry to say, low quality of questions and answers. Joining the two sites would increase the activity and the number of experts, improving all these aspects.
Good usage of tags on the new site would solve all practical aspects.
Upvotes: -1 <issue_comment>username_3: Hellz No! Where would people ask philosophical questions related to AI, or discuss theoretical topics? What about the Mythology of AI? (Off-topic at Stack:Mythology, but it is the predominant influence on the public's perception of AI.)
**Morality of AI applications is a critical issue, only increasing, as are social impacts of AI. This Stack is the forum to discuss them.**
**This is also the Stack for Game Theory as it relates to AI, and combinatorial games, which are inextricably related to AI, in that they are still used for AI proving.**
I'd propose, as the Cross Validated community has, that Data Science should probably be rolled into that Stack, and CV should probably adopt the name "Data Science" so people know where to go for those questions. (i.e. "CV" is cool, but it's insider-ey, and noobs don't know that's the place to ask Data Science questions related to AI, and come to SE:AI.)
So I don't think the problem is with the AI Stack at all. The humanities side of the equation, which is the core of this stack, should not be handled on a Data Science forum.
I think the solution would be to revise our "Community Guidelines" to be very clear about which questions should go to CV/DS, reposting their guidelines as a sub-section of AI's guidelines, and to try to get some involvement from the CV/DS forums on which questions to migrate, so we don't accidentally migrate questions they don't want.
Upvotes: 0
|
2018/03/01
| 552
| 2,088
|
<issue_start>username_0: My personal position is we should indeed take basic questions on any aspect of AI, so I'm not suggesting we end our experiment accepting implementation questions, but, where these questions haven't attracted an answer on AI, should we migrate to SE:Data Science or SE:Cross Validated?<issue_comment>username_1: No.
===
If a question is on topic, then it should stay here. Migrating is for *high-quality, but off topic* questions. This is why migrating a question involves closing as "off-topic". In this case, they're *not* off topic - they just haven't gotten an answer.
Now, *why* don't they have an answer? Probably because there's nobody on the site who knows how to answer it... or the right person just hasn't seen it. If nobody on the site knows how to answer a question, then the best thing to do would be to **attract users who *do* know how to answer the questions**, namely, "experts".
How do these "experts" find the site, though? Usually through *content already on the site* - it'll come up in a Google search or something. So to attract the experts, you need content, and if you send all the content away, then AI.SE won't get new users and the site will stagnate.
There's nothing wrong with having some unanswered questions around, as long as not *all* questions are unanswered. And if that happens, the site's got a big problem.
See also [Meta.SE guidance on migrations](https://meta.stackexchange.com/a/212271/294691).
Upvotes: 2 <issue_comment>username_2: [Questions older than 60 days cannot be migrated](https://meta.stackexchange.com/q/151890/295684), even by moderators. I don't think we can expect questions to always be answered in two months — from my experience on Super User, it's not at all unusual for months to pass before the right expert stumbles upon the question and solves it. We therefore have a bit of a catch-22 here. Even if we could ship out old unanswered questions, that's probably less than ideal for site growth; see [Mithrandir's answer](https://ai.meta.stackexchange.com/a/1327/75) for more on that.
Upvotes: 1
|
2018/03/12
| 577
| 2,143
|
<issue_start>username_0: This might seem a bit opinionated, but since I joined AI.SE I have seen a lack of biological questions on this site. Neurobiology was one of the main influences on AI, but I don't see questions on it: questions on topics like the brain, neurons, swarm intelligence, etc. What can be done to explore the biological side of AI on this site?
|
2018/04/13
| 543
| 2,163
|
<issue_start>username_0: So I don't know why, but suddenly there is a spurt of "Close Vote" reviews in my queue. It probably has something to do with the mass editing of posts by @Pheo. What exactly is the community policy on this matter? Most of the questions are pretty highly upvoted, like:
[Deducing features from the data-set](https://ai.stackexchange.com/questions/3965/deducing-the-features-from-the-data-set)
[Computing resources needed for Reinforcement Learning/Machine Imagery](https://ai.stackexchange.com/questions/5981/computing-resources-needed-for-reinforcement-learning-machine-imagery)
I want to know what is the suitable action in few of these example questions. Should the mods do something?<issue_comment>username_1: Edits don't cause things to head to the close vote queue. Edits on a closed question will sometimes push a question into the *reopen* queue, but never the close vote queue.
So for some reason, someone must have gone through and manually cast close votes/flags, pushing it into the queue.
Votes shouldn't affect the action you take on the post - if it was highly voted but then determined to be off topic, close it. If it's not close-worthy, leave it open. Each question should be judged on its own - if the standing policy on a specific type of question is that it should be closed, review it and pick the appropriate close reason. If you don't, pick Leave Open.
Unless someone is flooding the Close Votes queue with a mass of on topic questions, I don't think that any moderator action is necessary. There's nothing wrong with going through old questions that should be closed (although some would consider it a waste of time).
Upvotes: 3 [selected_answer]<issue_comment>username_2: There is definitely an uncommon surge in close votes in the queue at present. (Typically we see serial downvoting, but not so many close votes.)
It's useful information, in the sense of getting user opinions re: what's in scope, but we tend not to actually close unless the question is egregiously off-topic, unsuitable, or unsalvageable.
It's possible it is due to all the edits, bringing buried questions to light...
Upvotes: 1
|
2018/04/15
| 406
| 1,721
|
<issue_start>username_0: I am not asking this question to criticize or nullify someone's effort. I have noticed that over the last few days, since @Pheo started editing (which is a good thing), new questions asked by "new" users have been getting minimal views.
My question is: should the moderators accept so many edits at once that new questions get buried among the old ones? The old questions have a clear advantage in terms of the reputation of the OP, views, upvotes, and general titles. So what do the moderators think is a solution to this problem?<issue_comment>username_1: I think I have found a solution to this.
Until I attain edit privileges, I am only going to do a few posts a week, making sure that everything on them is as it should be (taking critical issues into consideration first). As much as this is going to slow the progress down, the quality will go up greatly. As a plus, I will have more practice and a lot more time for feedback per post.
I hope the majority of you stand with me, but regardless, I am going to do this. Please note, I have not drawn into the shell of seclusion, but merely been scolded and found a corner for myself to sit in for a while.
### TL;DR
I am going to be cutting back on the number of posts I edit and increasing the time spent on each post. In other words, keeping the impact down.
Upvotes: 3 [selected_answer]<issue_comment>username_2: For years, users have asked Stack Exchange to add an option not to bump a question/answer when it gets edited. Until this gets implemented, there will be some awkward balance between keeping imperfect content that could be edited and not burying new questions.
Upvotes: 2
|
2018/05/11
| 2,219
| 9,830
|
<issue_start>username_0: We've been informally allowing ML implementation questions, software & hardware evaluation questions, and the scope of the humanities side of the field has expanded also...
My sense is, the expansion of scope has been helpful and, in aggregate, welcomed.
* We're the general AI site, so I feel like pretty much anything we have a tag for, when it's related to AI, is within scope
For example:
**terminology** should *definitely* be on-topic and mentioned
**hardware evaluation** and **software evaluation** (libraries, frameworks, etc.) questions can be answered objectively and provide valuable information
**game theory** and extensions I'd personally like to see mentioned
**logic** seems to me to be fundamental, as does **probability**
The caveat is that we do want to work in conjunction with the communities with which we have overlap, and support those communities.
We feel firmly that there needs to be a Stack:AI, but we're still in the process of figuring out how to make that permanent, and so we also depend on the support of these related communities.
-----------------------
-----------------------
**Because we also deal with the humanities, I'd want to have an explanation of what constitutes a good "soft question".**
These are cases where there is not an objective answer, but answers that are sufficiently supported, ideally with citations, are legitimate.
These types of questions are a great opportunity to introduce OP's to fundamental concepts.<issue_comment>username_1: This site is about Artificial Intelligence (AI) which generalizes Machine Learning and Deep Learning:
[](https://i.stack.imgur.com/1scCQ.png)
Hence, I think, the site should embrace these subjects and be the home for questions about any of them: both practical and theoretical, science and engineering.
In order to do so and bring this great audience we should:
1. Change the name of the community to **Artificial Intelligence and Machine Learning**.
2. Write explicitly in the site description that it deals with those subjects and welcome questions about them.
Doing so, I believe, will fill a void in the SE network, which doesn't currently dedicate any community to gathering people who are experts in those subjects.
**Remark**
Image taken from the book [Francois Chollet - Deep Learning with Python](https://rads.stackoverflow.com/amzn/click/1617294438).
Upvotes: 2 <issue_comment>username_2: Similar questions come back on Meta, but no convergence.
I have been a proponent of technical questions since before the exchange's creation. The hairy issue is to clearly define the boundary.
Any kind of technical question will lead to an overflow of simple programming questions on how to do something with TensorFlow or PyTorch. Such questions are (in my opinion) better answered on Stack Overflow. These frameworks are still complex enough that many questions are really about syntax and framework-specific understanding (e.g. I concede it is hard to use TensorFlow if you have never used graphs or data flows).
Technical questions like "how many layers to do something?", "what architecture is best for mushroom recognition?", or "why SVM here and ANN there?" seem fine to me.
All in all, I hope the community still manages to attract questions about consciousness, AGI, ethics, etc. A tsunami of small technical questions is good for traffic, but causes a low signal-to-noise ratio.
Upvotes: 2 <issue_comment>username_3: The Data Science site already covers such topics. It is my opinion that the Artificial Intelligence site and the Data Science site should be merged, with a scope that would include
* The humanities of artificial intelligence (ethics, morality, etc.)
* The humanities of data collection and privacy (ethics, morality, etc.)
* The discussion of state-of-the art research in the field of artificial intelligence, machine learning and data science.
* Questions pertaining to the implementation of techniques and methods that can be used to achieve artificial intelligence (there are very few of these).
* Questions pertaining to the implementation of machine learning techniques and methods (Bayesian models, trees, neural networks, deep learning, etc.).
---
A site which combines both Artificial Intelligence and Data Science would have many **benefits**:
* A wider audience of potential answerers, such that individuals have a higher probability of finding resolutions to their queries. For example, a deep learning question asked on only one of the sites will not reach as many answerers, which hurts the questioner's chances of getting the best possible answer.
* The possibility for people with a strong implementation background, who are more likely to peruse Data Science, to also be involved in discussions regarding the ethics and morality of artificial intelligence.
* The possibility of those more interested in the humanities of artificial intelligence to see the kinds of problems that machine learning algorithms are capable of solving and forging stronger arguments about the ethical use and morality of artificial intelligence.
---
In my opinion, artificial intelligence does not yet exist; very fancy computational models which are essentially hyper-plane separators are not intelligent. However, due to the misnomer used in the media for machine learning, artificial intelligence is used to describe these techniques.
As a result, many questions on the Artificial Intelligence site do not match the intended guidelines of the site. Most questions on any particular day do not belong on this site and should be migrated to Data Science. I propose the sites be merged into a single site.
I really do like the questions asked on the Artificial Intelligence site and I would love to partake in them. However, reading through Stack Overflow and Data Science usually occupies most of the time I want to spend on my couch. Furthermore, I often see questions in Artificial Intelligence that are almost mirrors of those that have already been answered at great length in Data Science, specifically those relating to neural networks, backpropagation, or gradient descent.
I would ask kindly for the mods of this site to consider that in unity we are all stronger, in division we fall.
Upvotes: 0 <issue_comment>username_4: I'd personally like to expand the guidelines to formally include:
* a specific **AI** programming problem, or
* an **AI** software algorithm, or
* **AI** software tools commonly used by programmers; and is
* a practical, answerable problem that is unique to **AI** software development
which is basically Overflow with "AI" added to each line.
**WHY?**
My main competency is in the humanities side of the AI equation, but I don't think we're going to be able to sustain the level of activity needed to graduate from Beta on philosophical and conceptual questions alone. And I'm inclined to believe that AI is a field where the humanities and sciences intersect.
When I first came on as mod, there was a flood of Python questions related to AI development. It seemed clear that these endeavors constitute a relatively new sub-field. So while I'd point someone with a general Python question to Overflow or Computer Science, if that question relates to AI, I think it belongs here. That's just one example.
Upvotes: 1 <issue_comment>username_4: I preemptively modified the guidelines just now to make it clear that **reference requests** are on-topic. (We have a tag for it, and reference requests have utility and traffic-drawing value.) The idea is that experienced contributors can suggest reference materials with some vetting and, ideally, context and synopsis.
We also have **software evaluation** and **hardware evaluation** tags, and I'd like to add these officially as well because here there can be a great deal of objectivity. (i.e. processor performance can be precisely quantified, and functions related to AI development explained. Likewise, with software utilities, functions and capabilities can be accurately listed and broken down.)
**AI Career Advice**
I strongly feel this should be on-topic. While it's typically the type of thing one undertakes on chat, most chat participation is low, and good luck finding someone who can give you advice in any given span. AI has never been a more burgeoning field, with opportunities for the average programmer in addition to PhDs. A lot of people want to get into the field, and advice from professionals and scholars would be salient, beneficial, and could potentially boost activity/engagement with answerable questions.
Upvotes: 1 <issue_comment>username_5: I don't see a discussion of what constitutes a good "Soft Question", but @username_4's suggestions make sense:
* Questions should be rooted in existing AI research, or research by serious philosophers on AI related topics, not in popular non-fiction books. (i.e. favour <NAME> or <NAME> over <NAME>).
+ Rationale: Popular non-fiction tends to exaggerate AI's capabilities, and tends to be written by people with little actual knowledge of the field, despite reaching a broad audience. Questions rooted in this material will tend to elicit wildly speculative answers, or to be unanswerable.
* Soft questions and their answers should include supporting citations to scholarly works, and should be rooted in empirically supported facts whenever possible.
+ Rationale: A good example was a recent question on automation. It's easy to speculate, but there's actually lots of good data, both about what financial markets think will be automated, and what AI experts as a whole think can be automated. These estimates are likely to be far more reliable than an individual user's opinions, or even any philosopher's opinions.
Upvotes: 1
|
2018/07/14
| 1,551
| 6,421
|
<issue_start>username_0: I've been active on SE for some time and only recently joined the ai.SE community. In very large communities like SO, questions about resources are not welcomed and are usually closed after a short while. In smaller communities they are usually welcomed, since they can be helpful to beginners.
So, are these types of questions welcome here?
For the sake of the question, I've already posted a [question](https://ai.stackexchange.com/questions/7146/comprehensive-list-of-moocs-and-books-on-reinforcement-learning), but then this doubt came up, and I thought it was better to ask here instead of preemptively closing my question. The post itself isn't opinion-based or too broad but, as I said, not all communities welcome list-type questions.
---
Just to be clear, what I mean by resources isn't links to external sites that could easily expire. I mean books, articles, and so on. Of course, links to external resources like tensorflow/keras/caffe/etc. manuals, tutorials, or documentation are welcome.<issue_comment>username_1: The problems with generic external resource requests don't really change due to the size of the site.
* Links can fail, or go out of date. An answer that is mostly links could degrade so that it is not usable, unless it was actively maintained. This is also why link-only answers are discouraged.
* A "correct" answer is hard to assess.
* There is a strong element of opinion on what to include or exclude when compiling "comprehensive" lists. There is an implied "and the list should be reviewed for relevance and curated" which is hard to objectify, but if it wasn't present then clearly just Googling e.g. "Reinforcement Learning tutorials and MOOCs" would be enough for the OP.
* No-one will actually read or use a comprehensive list of introductory material. It becomes like a restaurant menu where a reader has to attempt to pick out the 2 or 3 items from the answer that would be most useful to them.
* I don't think that technical avoidance of actual hyperlinks, and use of ISBNs, course codes etc changes the nature of this at all. *Some* external references have a long shelf life. E.g. "Origin of Species" is still relevant today. But this does not apply to all books, just because they are books.
>
> Just to be clear, what I mean by resources isn't links to external sites that could easily expire.
>
>
>
Perhaps if you made clear what these non-link resources would look like in an answer, it could help move this out of being a request for generic resources, and become a more focused question. E.g. "What are the must-have introductory books in *subject area*, and what prior knowledge do they assume?" is a lot more focused than "I'm looking for a comprehensive list of MOOCs, books, tutorials and good resources", which is essentially asking for anything and everything that *might* be useful, without bounds.
Upvotes: 2 <issue_comment>username_2: After a week and after reading your comments and answers I still think that ai.SE could benefit from resource request questions. However, I think that my original question on main ai.SE is badly posed.
Some reasons why ai.SE could benefit from resource request questions are:
* They attract visitors. These types of questions usually get a lot of views and could attract new visitors from web search engines.
* They could prevent some users from posting dumb questions. This could be only a personal thought, but one of the main reasons why I chose to join this community is that, as a self-taught beginner, I don't know which sources I should consider trustworthy.
* They could be helpful even for non-beginners. Even at the semi-professional level, one could find new resources interesting.
* They could condense a lot of similar questions that ask for resources about some topic.
Of course, there are some disadvantages to this type of question. One of them is how to choose which answer should be accepted. I think we could have one answer for each resource suggested and one accepted community answer that keeps track of the top resources linked. The community answer could be edited by anyone who has a certain reputation.
Upvotes: 0 <issue_comment>username_3: When I think about this question in the context of research papers, for instance, I can't see a real issue.
Ideally, when posting research papers links, the title of the paper will be used in addition to the link, so if the link goes bad, people can still search for the paper.
<NAME>'s *[Artificial Intelligence: A Modern Approach](https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach)* is heavily cited on SE:AI, and the text was originally published in 1995. The book is in its 3rd edition now (which is not always noted when cited), but even the 3rd edition dates from 2009, earlier than the recent Machine Learning milestones (~2016), yet the textbook is still relevant and heavily utilized.
List questions do have some issues (see [username_1's answer](https://ai.meta.stackexchange.com/a/1371/1671)) and seem to be off-topic in general across Stack Exchange.
However, I'd still think lists of research papers on a given topic, ideally peer-reviewed, would provide utility and carry archival value. In the same way, lists of well-regarded textbooks could be useful.
---
Second Consideration: Contemporary Hacker Culture and Youtube
In some sense we're the "General AI" site, covering the full scope of the field, as opposed to focusing on any given specific aspect (distinct from stacks like Data Science.)
We seem to be the stack where beginners typically come to first. I created a [getting-started](https://ai.stackexchange.com/questions/tagged/getting-started) tag because there are so many of these questions.
Many people today are learning the basics via YouTube videos. Where the videos are solid, they seem to provide benefit, but they tend to be more ephemeral, especially when they come from non-academic sources. (<NAME>'s lectures on [Time Complexity](https://www.youtube.com/watch?v=moPtwq_cVH8) will likely be available for a very long time indeed, whereas a random YouTuber using click-bait titles and subject matter to generate ad revenue may not be.)
My feeling re: videos is that anything commercial should be avoided, but anything coming from accredited academic institutions is reliable and suitable.
Upvotes: 2
|
2018/08/01
| 2,310
| 10,011
|
<issue_start>username_0: Because of a comment added to someone else's question about it being off topic, I looked up the definition of what is on topic for the AI beta. [1]
I was startled to find that the current Artificial Intelligence SE site description is this.
>
> Q&A for people interested in conceptual questions about life and challenges in a world where "cognitive" functions can be mimicked in purely digital environment
>
>
>
What I would have guessed it to be based on actual Q&A content is this.
>
> Q&A for people interested in conceptual, mathematical, design, and approach questions related to the creation, use, and cultural impact of artificial intelligence and cybernetics
>
>
>
Below are reasons why that second phrase is more descriptive of what is actually discussed and accepted as Q&A. The evidence for these reasons is clear not only in the titles and bodies of the most popular questions and answers but also in tag usage, the top ten tags being these.
* Neural networks
* Machine learning
* Deep learning
* CNNs
* Reinforcement
* AI design
* Image recognition
* Algorithm
* Classification
* Training
Less than 1% of the content is about life and challenges in a world where cognitive automation is emerging. It is likely that many of our members couldn't distinguish a cognitive function from either first-order predicate logic or a learned routine. The latter two can be simulated in many respects with current digital systems, but it is unclear whether, in this century, cognition will be simulated. I know that's not the popular conception, but those with greater experience who have been watching AI and working in the field for decades know it to be the truth.
Here are my reasons why I think the current AI meta site definition is not consistent with expectations, actual content, or the real interests of the membership.
* A large proportion of the most active and interesting questions are about what can and has been done in the area of machine learning.
* Robotics appears and should appear in the content for the AI beta. To begin with, the field of AI was born as much out of the need for control systems faster than humans to defend against attacking aircraft and intercontinental ballistic missiles as out of the desire to do proofs using formal logic. More importantly, the long-awaited emergence of autonomous vehicles, automated vacuum cleaners, and a host of other non-military intelligent control systems is no longer held back by the prices of control system components (CPU, motor drivers, memory, operating system), and the applicable machine learning strategies are now on GitHub and packaged in Python and Java libraries.
* Many of the current AI beta Q&A are lacking in scientific rigor even though the AI beta is in the **Science** SE category. The use of mathematics is a quality factor in a science site as much as inclusion of academic references or narrowness of the problems set forth in the questions. I think it is correct to assemble AI under **Science** and not **Technology** because the technology side is covered under SE sites such as Arduino and Data Science, which are properly placed in the **Technology** category.
* I don't see a membership-wide interest in "Questions about life and challenges in a world where 'cognitive' functions can be mimicked in purely digital environment." People are aware that some job functions get replaced in the human job market by machines, and we've been, as a culture, adapting to it since the late 19th century. Industrialization saw the emergence of automated cotton picking, textile manufacture, type setting, and electronics assembly. Most people, myself included, will actually be happy when office work and programming are automated. It's boring and no more psychologically healthy than coal mining was healthy for breathing. Our members are here not to ask questions about how to live with or face the changes, but to engage in right-now, present-day adaptation to what is so obviously the nearing of another big job-market swing.
* For reasons given above, "cognition" is not the best choice of words to describe the relevant AI research and development under way today. Notice that the word cognition, or questions surrounding it, is rare in tagging, titling, and discussion. What is currently being synthesized is the lower mammalian functions that do not take place in the cerebral cortex and have no relationship with comprehension, discernment, or insight. AI beta Q&A does not fit the cognitive science definition of cognition or the dictionary definition of it. For instance, character recognition, visual collision detection, identity recognition, and such are not cognitive functions. Artificial analysis of large data sets, not for training but for feature extraction, is not cognitive either.
There's much more pointing to the inadequacy of the current description but I'll stop there. Back to the central question.
**Is the current out-facing description of the AI meta descriptive of what it is?**
The co-question is this.
**Is discussion about life in a changing world really what is relevant to most people who would search for *Artificial Intelligence* in the search field of SE? And if not, shouldn't we adapt to the real interests of our membership?**
There was an [older AI beta](http://area51.stackexchange.com/proposals/57719/artificial-intelligence) that had a great description for what the current AI beta does, which failed as an SE beta probably only because it was before its time.
>
> Q&A site for theorists, system architects and analysts of intelligent machines and software
>
>
>
---
**Footnotes**
[1] I had to log off to find the AI beta description, which is a bug report I might make, but not directly related to this question. It can be seen when one acts as a non-member and looks up "Artificial" in the SE site list using the search field.<issue_comment>username_1: >
> **Is the current out-facing description of the AI meta descriptive of what it is?**
>
>
>
I think the answer to this is a fairly obvious "no" at this point in time.
>
> The co-question is this.
>
>
> **Is discussion about life in a changing world really what is relevant to most people who would search for Artificial Intelligence in the search field of SE? And if not, shouldn't we adapt to the real interests of our membership?**
>
>
>
This is in my opinion the much more important question. Again, I think the answer is "no". Those topics may be interesting and relevant for some, and it's fine to allow them, but I suspect that much more detailed questions about specific little things in AI are more relevant to more people that happen to find their way onto this site. In my opinion, the description should indeed be adapted to allow more "technical" questions... basically, allow the kinds of questions we see many of. Not necessarily technical in the sense of "why doesn't this snippet of code work", but technical in the sense of "how/why/in what cases should this part of an algorithm work?"
---
A few minor nitpickings from me:
>
> Many of the current AI beta Q&A are lacking in scientific rigor even though the AI beta is in the Science SE category. The use of mathematics is a quality factor in a science site as much as inclusion of academic references or narrowness of the problems set forth in the questions.
>
>
>
I don't agree that this is a problem. StackExchange as a whole (all sites across the entire network) tends to be primarily about "quick" questions and answers, about building a site that people can easily reach through google searches, quickly see a question relevant to their search terms, and quickly find an answer that addresses their needs.
Most questions really don't need answers with a thorough literature review, like a scientific paper would have. Some do; sometimes there'll be a really great question that is best addressed with a great answer containing interesting references to literature, etc., kind of like how, on StackOverflow, you'll sometimes find great answers with lots of different possible solutions, a lot of work put into timing the different implementations, explaining observed performance differences, etc.
That's certainly not necessary for the majority of questions though. Many more questions are asked by non-experts, or first-year or second-year students for example. They might use slightly incorrect terminology, not be aware of all kinds of other potential solutions, etc. But when they have a clear question about an algorithm they're learning about, they just need an answer to that, they don't need a thorough literature review.
>
> I think it is correct to assemble AI under Science and not Technology because **the technology side is covered under SE sites such as Arduino and Data Science**, which are properly placed in the Technology category.
>
>
>
I don't agree with the bolded part there. I've personally never heard of Arduino, but a quick google search does not tell me how that covers a major part of AI at all, it seems really specific and niche. AI is also much much more than just Data Science. AI includes things like search algorithms, planning, pathfinding, and probably much more stuff that is not Data Science. People need a place to ask questions about all that, and it's not covered by any other StackExchange site.
Upvotes: 1 <issue_comment>username_2: >
> The evidence for the below reasons is clearly evident not only in the
> titles and bodies of the most popular questions and answers but also
> in tag usage, the top ten being these.
>
>
> * Neural networks
> * Machine learning
> * Deep learning
> * CNNs
> * Reinforcement
> * AI design
> * Image recognition
> * Algorithm
> * Classification
> * Training
>
>
>
All these topics are on-topic on <http://stats.stackexchange.com> and <https://datascience.stackexchange.com>. I don't see any point in having <https://ai.stackexchange.com> covering them as well.
Upvotes: 2
|
2018/08/07
| 2,107
| 8,660
|
<issue_start>username_0: *A knowledgeable user graciously submitted this edit for the topology tag, but the scope of the content is such that I wanted to bring it to the community for discussion:*
>
> **USAGE GUIDANCE**
> Topology is the study of geometric constructs and whether they can or cannot arise from other geometric constructs solely from stretching, contracting, bending, or twisting. In the context of AI, the topology correlates closely with the categorical capabilities of the system.
>
>
>
>
> **TAG WIKI** Topology is the study of geometric constructs as to whether they can or cannot arise from other geometric constructs solely from stretching, contracting, bending, or twisting. See the Merriam-Webster online dictionary definition of *topology* below.
>
>
> In the context of AI, the topology correlates closely with the categorical capabilities of the system.
>
>
> **Correct Use**
>
>
> The feedback of signals from the output of a network or a layer to a previous point in signal flow is a topological feature involving splitting and joining which matches the use of the term in mechanics, mathematics, semantic web analysis, and IT network provisioning.
>
>
> Gating and Attention organelles are topological.
>
>
> The signal paths that create the adversarial balance between generative and discriminative networks in the GAN design create a topology that is unique to collaborative circuit pairing.
>
>
> **Misuse**
>
>
> Although the term *topology* has been used in relation to the number of activation elements in a neural network layer, there is an undeniable semantic flaw with that usage:
>
>
> Neural networks form a geometry of discrete elements without meaningful size attributes. Activations and input attenuation parameters cannot be stretched, contracted, bent, or twisted meaningfully. Moving a vertex with the edges still attached in the graphic representation also lacks functional meaning.
>
>
> The only possible meaning of morphing a neural network layer is to change the quantity of its array elements, which means that the artificial neuron counts in the layers CANNOT be part of a network's topology if the word topology is to delineate anything at all.
>
>
> **Only Possible Logical Conclusion**
>
>
> If topology is to be applied to AI design, only features that cannot be changed solely by modifying the dimensionality of one or more arrays in the design can qualify as topological features.
>
>
>
---
>
> [The Merriam-Webster online dictionary definition of Topology](https://www.merriam-webster.com/dictionary/topology)
>
>
>
```
a :
1. a branch of mathematics concerned with those properties of geometric configurations (such as point sets) which are unaltered by elastic deformations (such as a stretching or a twisting) that are homeomorphisms
2. the set of all open subsets of a topological space
b : configuration — topology of a molecule topology of a magnetic field
```<issue_comment>username_1: *Disclaimer: NNs are not my area, and I only have a general grasp of General Topology. I come to AI via combinatorics and combinatorial games which influences my perspective.*
* Wondering if we should mention/emphasize "[network topology](https://en.wikipedia.org/wiki/Network_topology)"
When researching the field of topology, I noticed it did not deal with certain aspects of geometrical game boards, which are more in line with the "proto-topology" pioneered by Euler with the [Bridges of Königsberg](https://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg). (For instance, you can add or remove playable cells from a Chess or Go board, which alters the structure in terms of the number of connections, i.e. the network topology. From this standpoint, topological alterations to game boards can potentially break game solutions, which may have an impact on AI performance/strength.)
I'm also wondering how similar this usage of "network topology" is to the way the term is used with regard to NNs.
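As a toy illustration of that point (an editorial sketch, not part of the original answer), removing a playable cell from a grid board changes the connection counts, i.e. the network topology, of the neighbouring cells:

```python
# Toy sketch: a board as a graph of playable cells, where removing a cell
# alters the "network topology" (the neighbour counts of adjacent cells).
def neighbours(cell, cells):
    r, c = cell
    return {(r + dr, c + dc) for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]
            if (r + dr, c + dc) in cells}

cells = {(r, c) for r in range(3) for c in range(3)}  # a full 3x3 board
before = len(neighbours((0, 1), cells))               # edge cell: 3 neighbours
cells.remove((1, 1))                                  # delete the centre cell
after = len(neighbours((0, 1), cells))                # now only 2 neighbours
print(before, after)  # 3 2
```

The counts change even though no cell was "stretched or bent", which is exactly the Euler-style, connection-counting sense of the word.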
Upvotes: 0 <issue_comment>username_2: I **strongly recommend against** using this tag info, for the following reasons:
1. Tag info should be **easily understandable**, provide **clear information** that tells a user whether or not to use that tag / whether or not it's relevant for their question.
2. Tag info should be **unambiguous**, **correct**, and **not be up for debate**. The information in there should be "generally agreed upon" by people familiar with the relevant field(s) to be true.
I don't think either of these points are satisfied here.
---
For the **first point**, try reading through that text once, from top to bottom. Do you feel like you're now well aware of when the tag is or isn't applicable, what it's about? I certainly don't. I have the following concerns here:
* Usage guidance doesn't really tell us for which questions it should or shouldn't be used. It starts out with a bunch of fancy words that don't tell me anything about its relation to AI. It ends by telling us that "topology" is supposedly "closely related" to something, but still don't know **what it is**.
* Again, the main text / tag wiki doesn't provide clear information either. Again, lots of fancy words, but not much real information (definitely not **clear** information).
* The tag wiki gives some examples of things that are "features of topology" or are "topological", but we still don't have a clear description of what it's supposed to be.
---
For the **second point**, I have the following concern:
* It dives straight into "Correct Use" and "Misuse" headers, which is already hinting at the definition being up for debate, having multiple uses, being potentially ambiguous or not generally agreed upon. More importantly, **as someone familiar with Neural Networks this might be plain incorrect according to my experiences**. I say "might be" rather than "is" because the text is so incomprehensibly complicated that I can't tell for sure what it's actually saying.
In general, in AI, when people are talking about "topology" in the context of Neural Networks, it's used to describe the "architecture" of the Network; how many layers, what types of layers, how large is each layer, what activation functions do we put in between, where are the connections (typically a feature of layer type). That's basically it, and that can be explained very clearly in language that can be understood easily. Some sources:
* [A quora question](https://www.quora.com/What-is-the-difference-between-neural-network-architecture-and-topology)
* [A well-known paper on using evolutionary search to optimize a Neural Network's topology](http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf) (indeed, it's evolving the "structure" of the neural network).
Many more similar sources can be found through a google search for "Neural Network topology".
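As a concrete (purely illustrative) sketch of what a "topology" in this architectural sense captures, consider describing a small feed-forward network by its layer types, sizes, and activations; all names below are my own, not from any particular framework:

```python
# Minimal sketch: a feed-forward network "topology" in the architectural
# sense used in the NN literature: layer types, sizes, and activations.
topology = [
    {"type": "input", "size": 784},
    {"type": "dense", "size": 128, "activation": "relu"},
    {"type": "dense", "size": 10,  "activation": "softmax"},
]

def num_parameters(topology):
    """Count the weights and biases implied by fully connected layers."""
    total = 0
    for prev, layer in zip(topology, topology[1:]):
        total += prev["size"] * layer["size"] + layer["size"]
    return total

print(num_parameters(topology))  # 784*128 + 128 + 128*10 + 10 = 101770
```

A structure-evolving method like the NEAT paper linked above is, in effect, searching over descriptions of this kind rather than over the weights alone.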
That is specifically in the **context of Neural Networks**. This will likely be the most common natural usage of the tag on this site. However, as also mentioned in the proposed tag wiki, **topology is also a completely different field of mathematics**. And [that field of mathematics may sometimes be relevant in a completely different manner in the field of AI](https://www.quora.com/What-has-topology-got-to-do-with-machine-learning). So, "topology" can be **ambiguous**, and **probably should not be restricted only to the usage in the context of Neural Networks**.
---
As a final concern, I am wondering what a header saying **"Only Possible Logical Conclusion"** is doing in a tag wiki. That's a header I'd expect in an opinion piece, or maybe as an overly-sensationalized header after a mathematical proof. This is not a header that belongs in a neutral, informative Tag Wiki.
---
Now, given the use of language, I know immediately precisely which person proposed that tag wiki edit. For context, I think it's important to note that **I've previously had a long discussion with this user concerning terminology**, [which can be read here](https://chat.stackexchange.com/transcript/81180).
Note that, in that discussion, it becomes very clear that this particular user is knowingly and **actively trying to push the usage of new terminology that he personally believes to be "better" than commonly-used terminology across the entire field**. That is fine, he can do that if he likes, even on this site by e.g. asking questions like "Wouldn't X be a better term instead of Y because reasons Z?" But this should not be done through tag wikis. **Tag wikis should be consistent with language used commonly across the field**, otherwise every single non-expert user visiting the site (and maybe even expert users) is going to be confused.
Upvotes: 2
|
2018/08/09
| 504
| 2,094
|
<issue_start>username_0: To give some background, recently I came across an answer which used some religious stories to explain a question. I reported it as spam as it contained link to a personal blog (it got consequently removed).
The answer in itself was faulty as it did not properly cite the sources of the religious stories. I would not have raised a flag if the person had cited the sources correctly.
My question should we allow answers based on religious sources in this stack? My opinion is since we ourselves have no real true understanding of AI one should allow the religious sources to be counted as philosophy, since it is speculative. What are your thoughts on this matter?<issue_comment>username_1: Our [justification policy](https://ai.meta.stackexchange.com/q/1285/75) requires that speculative answers give some justification or reasoning for their assertions. I originally intended it for things like this hypothetical (and somewhat hyperbolic) exchange:
>
> Q: What is the risk from widespread deployment of self-driving cars?
>
>
> A: Self-driving cars will work fine for a while, gain sentience, turn malevolent, wait for a perfect opportunity, and kill us all. This sequence of events is 100% certain.
>
>
>
Wild speculation requires some sort of justification. It still might not be correct (votes can indicate accuracy/reasonableness), but there must be some explanation of how the answer author arrived at their conclusion. Citing sources is a great way, but not the only way, to provide that. It seems to me like drawing on philosophy/religion is a decent method to explain one's position.
Regarding the specific answer you discuss: I deleted it not for lack of explanation but because — despite its considerable length — it didn't actually address artificial intelligence.
Upvotes: 3 [selected_answer]<issue_comment>username_2: My sense is that answer that provide a religious perspective can be on-topic for certain issues related to social or philosophical subjects, but do need to be well supported, and ideally should be well referenced.
Upvotes: 0
|
2018/08/18
| 631
| 2,827
|
<issue_start>username_0: Recently I [flagged](https://ai.stackexchange.com/questions/7602/how-can-i-train-model-to-extract-custom-entities-from-text) a question as more suitable for Data Science Stack Exchange and should be migrated there. However it was declined. To me this is clearly a Data Science question (no ambiguity). Did I make a mistake in my judgement or are we entertaining Data Science questions also?<issue_comment>username_1: I personally feel like most Data Science questions would be just fine on AI too, Data Science and AI simply are very closely related. The only argument against having Data Science questions on AI.se that I'm aware of basically boils down to trying to avoid as much overlap as possible.
From my point of view, that kind of overlap really isn't too much of a problem. The topic in the question (entity recognition from text) is certainly a topic that could be described as being a part of "Artificial Intelligence", and it would be just as correct as saying it's a "Data Science" topic. So I personally really wouldn't mind if it's allowed on either site, I can see it fitting in either just fine. I understand that StackExchange as a complete network might find it more problematic if there's too much overlap, and if that's the case their opinion is probably more important than mine, I just don't experience it as problematic personally.
The only sentence in the question you linked to that is maybe a bit questionable in my opinion is the following:
>
> I have tried Spacy and NLTK for entity extraction but that doesn't suffice above requirements.
>
>
>
That sentence is describing specific tools/frameworks, and implies the question-asker might be looking for more names of similarly specific tools/frameworks. I do feel those kinds of questions would be a better fit for Data Science.
But the same question, especially if you ignore that one sentence, can easily be interpreted as being of a more conceptual nature, asking more generally about techniques/algorithms that would be applicable. It looks to me like both of the current answers also interpret the question in that way. Such "conceptual" questions would be just fine here in my opinion.
Upvotes: 3 [selected_answer]<issue_comment>username_2: To second username_1's answer:
Much, but not all, of Data Science relies on AI tools. When the question relates to AI, we should answer it.
Some examples of Data Science topics that are *not* about AI, and which we should migrate are:
* Questions about scraping data from the web.
* Questions about hypothesis testing or other conventional techniques from statistics (unless about the evaluation of ML methods).
* Questions about programming languages or toolkits within those languages, that focus on syntax or programming rather than AI/ML algorithms.
Upvotes: 2
|
2018/08/29
| 1,277
| 4,625
|
<issue_start>username_0: If you take a look at our states on Area51, the two big areas that are still a problem for the beta are the rate of unanswered questions (which is close to 25%), and the average number of answers per question (which is about 1). These are related statistics.
Looking through our queue of unanswered questions, there are some that might be answerable, but that I cannot answer, and many that look unanswerable or very low quality. Many of the questions are very old.
Should we make a more concerted effort to close questions that are old **and** unanswerable? What criteria should we use to decide if a question meets that standard?<issue_comment>username_1: If a question is unanswerable, it should be closed, be it old or new. This is more or less what closing is for.
But don't do it for the sake of Area 51 statistics. Those statistics outlived their usefulness, as did Area 51 itself. The post [Graduation, site closure, and a clearer outlook on the health of SE sites](https://meta.stackexchange.com/questions/257614/graduation-site-closure-and-a-clearer-outlook-on-the-health-of-se-sites) explains that already in 2015, those stats did not really matter for site graduation or closure.
Upvotes: 2 <issue_comment>username_2: Speaking as a pro tem mod, we see a lot of single close votes, but tend to give the OP the benefit of the doubt, and err on the side of caution.
My feeling is the best method to increase closure of these "grey area" questions is to keep attracting knowledgeable contributors, and supporting those contributors by upvoting good questions and answers, so that more users have informal moderator privileges. (i.e. I'm personally more comfortable with closures being consensus-based because, as JD notes, it can be a difficult determination, even for qualified individuals.)
That said, I'd like to prune away as much of the noise and dead-weight as possible to improve our stats. I'm wondering if we might start a chat thread to address questions in limbo, so that if contributors make a strong case for closure, the moderators can more confidently take action.
Upvotes: 1 <issue_comment>username_3: I make a point of visiting the unanswered queue on all sites that I am active on. It's possible to earn an [Explainer](https://ai.stackexchange.com/help/badges/90/explainer), [Revival](https://ai.stackexchange.com/help/badges/64/revival), [Necromancer](https://ai.stackexchange.com/help/badges/17/necromancer) or other badge available to new questions.
We should run through the queue when we visit here.
The site [Interpersonal.SE](https://interpersonal.stackexchange.com/questions?sort=unanswered) has an unanswered queue style similar to ours (single tab), while [LifeHacks.SE](https://lifehacks.stackexchange.com/unanswered/tagged/?tab=noanswers) has the advantage of a multi-level Unanswered Questions queue; with additional tabs for "my tags", "newest", "votes" and "no answers" permitting better differentiation. Both those sites have a similar total number of questions as we do, yet the number of unanswered questions is near zero.
The remaining question is do we want a multi-level queue like LifeHacks has? I'm new to AI.SE, so I'd prefer a senior member put in a feature request over at [meta.SE](https://meta.stackexchange.com/search?q=unanswered%20questions).
Be certain to review and improve upon these similar requests, which were ignored or status-declined:
* [Improving navigation around unanswered questions](https://meta.stackexchange.com/questions/8506/improving-navigation-around-unanswered-questions)
* [How to search unanswered questions](https://meta.stackexchange.com/questions/16542/how-to-search-unanswered-questions)
* [Tab for questions that are labeled with favorite tags](https://meta.stackexchange.com/questions/11563/tab-for-questions-that-are-labeled-with-favorite-tags)
* [Are unanswered questions a problem yet?](https://meta.stackexchange.com/questions/143113/are-unanswered-questions-a-problem-yet)
* [How should users handle unanswered questions?](https://meta.stackexchange.com/questions/159964/how-should-users-handle-unanswered-questions)
Fortunately, [automatic deletions](https://meta.stackexchange.com/a/92006/282094) are performed on old questions:
>
> If the question is more than 365 days old, and ...
>
>
> * has a score of 0 or less, or a score of 1 or less in case the owner's account is deleted
> * has no answers
> * is not locked
> * has view count <= the age of the question in days times 1.5
> * has 1 or 0 comments
> * isn't on a meta site
>
>
> ... it will be automatically deleted.
>
>
>
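The quoted criteria are mechanical enough to sketch directly in code (a hedged illustration; the field names below are made up for readability, not the actual SE schema):

```python
# Sketch encoding the quoted auto-deletion criteria for an old question.
def eligible_for_auto_deletion(q):
    score_limit = 1 if q["owner_deleted"] else 0
    return (q["age_days"] > 365
            and q["score"] <= score_limit
            and q["answers"] == 0
            and not q["locked"]
            and q["views"] <= q["age_days"] * 1.5
            and q["comments"] <= 1
            and not q["on_meta"])

old_question = {"age_days": 400, "score": 0, "owner_deleted": False,
                "answers": 0, "locked": False, "views": 500,
                "comments": 1, "on_meta": False}
print(eligible_for_auto_deletion(old_question))  # True
```

Note how a single answer, a lock, or too many views is enough to keep a question alive under these rules.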
Upvotes: 1
|
2018/09/05
| 1,305
| 5,336
|
<issue_start>username_0: *Earlier this year we announced a "site sponsorship" program aimed at bringing in industry and project-team leaders to help support these sites. A lot of your questions about this program can be answered through the posts and comments there:*
***[Site Sponsorships — Bringing Resources Back to Stack Exchange](https://meta.stackexchange.com/questions/307861/sponsorship-pilot-bringing-resources-back-to-stack-exchange)***
---
Today I am excited to announce that **IBM has generously agreed to sponsor the Artificial Intelligence site** by becoming a partner with the AI community — to encourage innovation within this space by helping developers solve the complex problems facing this field.
The primary focus of a sponsorship is to help bring more resources to this site. IBM is currently working with Stack Exchange to promote this site at their [**Winning with AI conference**](https://www.ibm.com/analytics/win-with-ai/) on September 13, 2018 in New York City. IBM will also be advertising on Stack Overflow to drive further traffic and usage back to the AI site. IBM will also bring various dev teams throughout their organization (many with communities of their own) to participate on the AI site, and to help expand awareness of the industry itself.
How does a sponsorship affect this site?
----------------------------------------
Site sponsorships are administered much like the "tag sponsorships" you may have seen on other sites. Apart from the visual updates and site promotion, you should not see any significant changes in the scope or the operation of this site. Site sponsorship is essentially "strategic philanthropy" where industry partners like IBM can give back to developer communities by having a presence on the site, and to provide a place to help ask and answer questions.
Let's get a few immediate concerns out of the way
-------------------------------------------------
First — sponsors do not "own" these Q&A sites. Sponsors work *alongside* our communities who ultimately build these sites. Communities ask the questions; communities create the tags; communities conduct elections as they do now. Any ads a sponsor might submit still has to go through our crazy-strict ad editorial process… as it has always been. Companies do not have access to personal data, and all Q&A content remains irrevocably licensed under Creative Commons for sharing and attribution.
I am energized about the potential for working with companies like IBM as a way to expand our sites' growth and to help bring in new communities, and maybe even build out some new features for Q&A sites like this. Every site will ultimately benefit.
On a personal note, I continue to be impressed by just how attuned our marketing team and partners have been to the concerns of our Q&A sites. We will work hard to find organizations who are willing to cede so much control back to the community. It can be difficult to anticipate all the hiccups we might encounter along the way, but we remain steadfast in the guiding principle that these ideas should NOT interfere with the main experience of the Q&A, and IBM seems to fit that relationship and expectation to a T.
*Enjoy!*
<issue_comment>username_1: This is great news. We may still be in Beta, but hopefully not for much longer! I'm taking it as a good omen that a foundational company in the field of AI wants to be associated with our dynamic and growing Artificial Intelligence stack!
Upvotes: 3 <issue_comment>username_2: I do appreciate the IBM work!
However: *how does this newly sponsored AI site differ from the other sites that have partnered with SE, like Ask Ubuntu?*
Is this sponsorship a one-off proof of concept, with existing sites to be sponsored in the future if it works? Otherwise I don't think I understand how sponsorship equates to "bringing resources back"; creating more sites sounds like spreading resources out. Or is this more of a "we want to advance knowledge in our field for altruistic reasons" thing for companies, or is it supposed to turn a profit for them eventually?
Based on my analysis of the snapshot of the site in the question body, a sponsorship generally entails enabling ads relevant to the subject and affixing a small "sponsored by..." logo in the upper-right corner, and I think that's why *IBM* is here.
**Anyway, much appreciated.**
Upvotes: 2 <issue_comment>username_3: Slight concern: Does this mean that a new moderator will be appointed on AI.SE? Some of us here are beginners and may post seemingly naive questions and answers. The new moderator might close or delete such questions and answers.
The current moderators understand these concerns and have a very good moderating policy on such type of questions and answers.
Thought I would add some links:
[Are we too fast downvoting questions, especially for newcomers?](https://ai.meta.stackexchange.com/questions/1313/are-we-too-fast-downvoting-questions-especially-for-newcomers)
[13 out of the 15 questions on the front page right now are -1 or lower score: This site needs a broader scope or it's doomed](https://ai.meta.stackexchange.com/questions/1289/13-out-of-the-15-questions-on-the-front-page-right-now-are-1-or-lower-score-th)
Upvotes: 1
|
2018/09/06
| 1,179
| 4,933
|
<issue_start>username_0: [This question](https://ai.stackexchange.com/questions/3343/what-are-the-latest-methods-to-train-a-chat-bot) was recently bumped to the front page. On some other SEs, the words "latest methods" would be a red flag and potentially cause it to be closed because any answer would have to be updated to continue to be true. The general philosophy on those other sites is that questions should be answerable in a fashion that is going to be useful not just to the questioner, but also to future visitors to the site. Indeed, one of the major uses of the SE network is as a long-standing repository of knowledge for future questioners.
Although new things can always be discovered that invalidate old answers that pre-date them, questions like the one I link to are going to attract answers that are always going to be going out of date. There is no way for an answerer to answer this question in a way that will be useful and true for a reader in six months or a year.
Is that an issue here, or are we fine with questions whose answers will inevitably become out of date?<issue_comment>username_1: I'd personally hesitate to declare questions off-topic just because the "correct" answers to them are highly likely to change over time. Indeed, this is going to be the case for "state-of-the-art" questions, especially considering how rapidly research in the field is moving and how rapidly the state of the art changes. I personally still feel like such questions aren't overly problematic because:
1. They are likely a class of questions that is the most interesting for some people. In a field that moves this rapidly, and where young people are newly entering the field also at increasing rates, there is a lot of interest in knowing "what is the state of the art right now?". If there is a lot of demand for such questions, it makes sense to have a place where they can be asked to me.
2. The web interface of the site already puts timestamps (dates) on questions and answers. Future visitors will be able to see (if they pay attention) if an answer they've run into is rather old.
3. In the future, if the state of the art has significantly changed, people are free to provide new answers or add comments to existing (outdated) answers. If the people who wrote the original outdated answers are still around, they can also edit them. See, for example, [this old question on stackoverflow](https://stackoverflow.com/q/6542274/6735980). It was originally asked 7 years ago, and was about the feasibility of training an Artificial Neural Network to play a complex video game like Diablo 2. At the time, this was highly unlikely to be feasible. However, we see some answers being written a few years later, and also see many edits in the question itself and in older answers, to reflect progress in the field.
Upvotes: 2 <issue_comment>username_2: There is a difference between disproven and out of vogue. What is proven false, if the proof stands up to thorough scrutiny is unlikely to have any future value other than to demonstrate how some things that were once widely believed may be later disproven. These are some examples.
* The sun travels around the earth.
* Air, earth, water, and fire are the four elements.
* All propositions within a mathematical system can be proven or disproven.
* All natural phenomena can be placed in algebraic closed form.
Many things that were thought absurd have been proven.
* Mercury is travelling too fast for its orbital path to be predicted by Newton's laws.
* Humans can survive in space and return alive.
* Computers can accurately and reliably sort mail with handwritten addresses.
* Computers can be trained to generate pictures of interior designs.
However, very little in a Q&A community consists of theorems that can be proven or disproven. Much of what is discussed is technique (in the Jacques Ellul sense of the word) that may fall into vogue and then out of vogue more than once. Something that is thought to be obsolete (but not formally disproven) may rise back into common use, or may return slowly from obsolescence over decades. Here are just a few examples of this toggling evident today.
* Earth is round -> earth is flat -> earth is round
* Vector graphics -> raster graphics -> vector graphics
* Ether -> no ether -> ether
* Turmeric -> chemotherapy -> turmeric
* AI via imitating biology -> AI via logic -> AI via imitating biology
Given the history of science and technology, unless we can flat-out disprove an answer, we cannot assert, solely on the basis of its current apparent obsolescence, that it will never return to the forefront. It is rather probable that something we now consider obsolete will become a key element in the furtherance of one of the sub-fields of Artificial Intelligence.
Others in the future may look back and consider us ignorant for our current belief that some solution or approach is obsolete.
Upvotes: 1
|
2018/09/06
| 383
| 1,285
|
<issue_start>username_0: Right now, we have a tag called [theorics].
"Theorics" is kind of a dubious word. I've never heard the word used anywhere but this very site. [Wiktionary says](https://en.wiktionary.org/wiki/theoric#English) that the word is "obsolete". [Google Ngram Viewer](https://books.google.com/ngrams/graph?content=theory%2Ctheorics&year_start=1800&year_end=2000) shows that the word "theory" is *over one hundred thousand times* as common as "theorics".
Should we change it to "theory" or something?<issue_comment>username_1: I like **theory**.
I recently [made a post](https://ai.stackexchange.com/questions/7861/what-does-hard-for-ai-look-like) and was highly surprised to find out that "theory" wasn't a tag option. I have never heard of the word "theorics" before in my life and only found it by looking at the list of tags. **Edit:** I didn’t even realize it was “theorics,” I thought it was “theoretics” which sounds more like a word to me.
Speaking as a native English speaker whose degrees are in *math* and *philosophy* I feel quite inclined to say that a word for “theory” that I’ve never heard in my life is unlikely to be widely known :P
Upvotes: 4 [selected_answer]<issue_comment>username_2: Yeah, changing it to "theory" sounds good to me.
Upvotes: 2
|
2018/09/20
| 1,865
| 7,585
|
<issue_start>username_0: This is a question to ask the social engineers employed by those who own SE and SO, but I would like to vet it here before doing that.
**Background**
Consider that the SE/SO structure is, from one perspective, a game, whether or not it was intended to be. In the context of the best of the social networking models that have obtained some success on today's web, all of which were derived from or influenced by Morgenstern and von Neumann's *Game Theory*, gaming the system is what contributors to the content of the site do.
From a systems analysis perspective, whether their intention is any of the following or some proportional combination of them, playing for some objective can be proven as the prime motivator for all SE engagement.
* Obtaining an answer for use in a project or to satisfy an interest
* Educating one's self by writing and evaluating responses
* Educating others out of purpose driven or altruistic motivation
* Gaining reputation to be seen in a community as an expert
* Writing because of satisfaction in writing
* Interest in public affirmation of intelligence or expertise
* Intrinsic value of high numeric reputation resulting from good PR
* Addictive compulsion lacking a cognitive cause
* Some other reason
The SE/SO system has adapted for the purposes of growth, and it works as is. The original domain stackoverflow.com is rated 64th globally by Alexa, and the AI beta's main domain stackexchange.com is rated 127th.
In this context, the deltas aggregated in member reputation incentivize behaviors that cause the increased growth of SE and SO through the system.
**How This Social Model Applies to Down Voting**
There are two distinct types of down votes.
* Anonymous down vote
* Down vote with associated reasoning for it
Both have purpose in that they identify a perceived error or inappropriateness in the Q or A. Both currently have the same negative effect on the voter's reputation.
Is this optimal?
**Critique from a Social Network Equilibrium Perspective**
The advantage of the SE/SO system's game objective to humanity is that it helps the global development and dissemination of subject specific knowledge. From a house (SE/SO business) perspective, it is to improve the domain's rating globally.
With regard to both of those game objectives, a down vote with an associated reason has greater value than an anonymous one in two respects.
* Anonymity decouples the down vote from an ethical incentive.
* The expression of reasons provides additional information to both the writer and the entire public readership.
The current SE system merely indicates to the down voter that a reason in the comments is preferred. It makes much more sense from an optimization point of view to use the reputation system to simultaneously:
* Incentivize against down voting with ulterior motives.
* Incentivize *for* information transparency.
The first is part of civilization in that those indicted can face their critic in academia and face their accuser in court. The legal ethics behind this has much to do with the necessary checks and balances in civilized social structure. The asymmetry of accountability in being able to dispute an answer and not being required to specify why leads to uncivilized behavior, which is why academia and ethically evolved legal systems do not permit it.
**Potentially Beneficial Change**
The overall metrics of satisfaction, positive impact on global understanding, and the value of the two domains (SO and SE) would likely be improved if transparency was not sacrificed for the sake of anonymity.
This is a potentially beneficial change.
>
> Anonymous down votes would still be permitted, but an integer value greater than 1 would be subtracted from the voter's reputation.
>
>
> Reason-associated down votes would not subtract anything from the voter's reputation, since there is as much reason to incentivize the action as to disincentivize it.
>
>
>
**A Possible Implementation**
To implement this, in addition to the verbal encouragement to add a comment when a down vote is cast, a comment field labeled "Reason for down vote" could be added, with a minimum required number of words. Providing that reason would differentiate the two cases, so that the reputation adjustment could reflect the more optimal incentive model for down voting.
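As a purely illustrative sketch of how the two cases could be differentiated (the field name, word threshold, and reputation costs below are hypothetical values invented for this post — nothing like this exists in the Stack Exchange API):

```python
# Hypothetical sketch of the proposed down-vote accounting.
# All constants are invented for illustration.

MIN_WORDS = 5           # minimum length of "Reason for down vote"
REP_COST_ANONYMOUS = 2  # proposed cost, higher than today's -1
REP_COST_REASONED = 0   # no cost when a reason is supplied

def downvote_rep_cost(reason: str) -> int:
    """Reputation subtracted from the voter for one down vote."""
    if len(reason.split()) >= MIN_WORDS:
        return REP_COST_REASONED  # transparent vote: free
    return REP_COST_ANONYMOUS     # anonymous vote: costs more

print(downvote_rep_cost(""))  # anonymous down vote
print(downvote_rep_cost("the answer misreads the question entirely"))
```

The exact threshold and costs would of course be for the SE team to tune; the point is only that the two vote types become mechanically distinguishable.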
**Transparency With Anonymity**
Down voting transparency and anonymity could easily be achieved concurrently by creating an Anonymous user for such purposes. This solves the problem of retaliatory conduct, but it may not solve the problem of asymmetric accountability, where one can harm another without any risk of consequences, which is a policy not found in well developed systems outside the SE/SO space.
The question then becomes, why did academia and ethical judicial systems require transparency without anonymity except when a child or undercover public servant is involved?<issue_comment>username_1: I and many others would be very happy if people explained their downvotes more often. Essentially this request has been [already been discussed on Meta Stack Exchange](https://meta.stackexchange.com/q/135/295684). As a result, the "please consider adding a comment if you think this post can be improved" pop-up was added. However, adding an impact on reputation to commenting would damage anonymity and/or produce a spew of useless comments:
>
> I enjoy being able to down-vote posts I don't care for without worrying about retaliation. And I really enjoy being able to leave honest comments without worrying that they'll be justifiably interpreted as evidence that I've down-voted. I would not like to see the two systems linked.
>
>
>
—[Shog9](https://meta.stackexchange.com/questions/135/encouraging-people-to-explain-downvotes/2373#comment313_135)
>
> The so-far-insurmountable problem is preventing users from just keyboard bashing "aassdgfd" if forced to type something.
>
>
>
—[bananakata](https://meta.stackexchange.com/questions/135/encouraging-people-to-explain-downvotes/2373#comment1384_135)
Therefore, Stack Exchange seems to have decided not to implement further changes, and will probably not do so in the future.
Anonymity is important to allow people to vote as they believe without fear of retaliation (in the form of revenge downvoting). Stack Exchange's model has always been that people can vote however they like as long as they're not targeting specific users. A single user's votes might not be very illuminating, wisely cast, or [explicable at all](https://meta.stackexchange.com/a/215397/295684), but at scale votes usually average out to good rankings.
Upvotes: 3 <issue_comment>username_2: I think the number of votes is also a factor. Right now, on SE:AI, we have relatively low voting participation. This makes us a tough stack to build rep on, but it also makes the solo downvotes stick out.
Compare to a stack where questions and answers receive a large number of votes quickly. When voting activity is high, the random downvotes have less of an impact.
So, in some sense, the solution is to keep working to attract users, and boost the voting levels.
---
I have seen what I believe to be pro forma, serial downvoting in the past on SE:AI. My remedy for that has been to look at all new questions every day *(been slacking lately, admittedly,)* and make a point of upvoting questions I think have been unfairly downvoted.
With answers, it's a little tougher b/c one doesn't want to up or downvote without a high degree of confidence.
Upvotes: 2
|
2018/09/23
| 984
| 3,931
|
<issue_start>username_0: Per Moderator @BenN's request in [this thread](https://ai.meta.stackexchange.com/questions/1422/how-can-we-change-the-site-description-to-match-our-current-topic-guidelines-an/1423?noredirect=1#comment1708_1423) to change the site description, we need to open a new thread and vote on new suggestions.
Please propose site description texts exactly as past users did in [this](https://ai.meta.stackexchange.com/questions/1197/how-can-we-quickly-describe-our-site) older thread, and vote on the suggestions of other users. After a reasonable consensus is reached, the moderators will update the site descriptions to match the top voted answer.
For people new to this process, most of the AI SE suggestions in the past follow the convention of the descriptions of most SE sites and start with something like, "Artificial Intelligence Stack Exchange is a question and answer site for ..."<issue_comment>username_1: Artificial Intelligence Stack Exchange is a question and answer site for ...
>
> people interested in artificial intelligence theory, design,
> development, **practice**, **research**, and policy.
>
>
>
I like @DouglasDaseeco's answer, but I'm among the users who think that practice, *and even code*, have a place here. Presently users post questions containing code, and I and others answer them, so I think this description is more accurate.
While the founding moderators' intent was to exclude questions that overlapped with other sites (notably Data Science & Programmers.SE), the boundaries are quite porous in practice, and if we want to claim to be a useful place for AI related Q&A on the web, I think we need to accept practical questions as well.
Some examples of coding questions with no other place to go include:
[Keeping track of visited states in Breadth-first Search](https://ai.stackexchange.com/questions/7555/keeping-track-of-visited-states-in-breadth-first-search/7560#7560), which is about the proper data structures to use in a search algorithm. It doesn't belong in Data Science, since it is related to GOFAI and not machine learning. It doesn't really belong in Programmers.SE, because it isn't a generic question about programming, it's related to understanding the algorithm. It seems to clearly belong on this site, and yet it includes code and is about practice.
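For readers unfamiliar with the pattern that question is about, here is a minimal sketch (the toy graph is invented for illustration; it is not the asker's code):

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search tracking visited states in a set.

    `neighbors` maps a hashable state to its successor states.
    Returns a start-to-goal path as a list, or None if unreachable.
    """
    frontier = deque([[start]])
    visited = {start}  # the data structure the question asks about
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in neighbors(node):
            if nxt not in visited:  # mark when enqueued, not when expanded
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy graph, purely illustrative
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs('A', 'D', graph.__getitem__))  # → ['A', 'B', 'D']
```

Answering it well requires knowing why the visited set must be a hash set and why states are marked at enqueue time — algorithmic knowledge, not generic programming help.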
[Snake game: snake converges to going in the same direction every time](https://ai.stackexchange.com/questions/7940/snake-game-snake-converges-to-going-in-the-same-direction-every-time/7941#7941) This question was about the implementation of a reinforcement learning algorithm. The question again has nothing to do with Data Science. It involves programming, but the users' problems were not related to understanding how to program, but to understanding the algorithm (and, as it turned out, the exact behaviour of a particular algorithm for training neural networks). This user is not likely to get useful answers on Programmers.SE. It seems to clearly belong on this site, and yet it also includes code and is about practice.
Upvotes: 2 <issue_comment>username_2: I think we should also provide guidance to users about which questions may be more suitable for Data Science, Overflow, etc.
Upvotes: 2 <issue_comment>username_3: The two leaders are ...
>
> people interested in artificial intelligence theory, design, development, practice, research, and policy.
>
>
>
... and ...
>
> people interested in embedded, mathematical, cognitive, and discovery centered artificial intelligence research and development.
>
>
>
... so I propose the union.
>
> **people interested in AI theory, mathematics, research, discovery, design, development, practice, embedded uses, cognition, policy, and impact.**
>
>
>
---
This one is inclusive and dodges the terms *statistics* and *data science* which are the explicit domains of established SE siblings.
Upvotes: 2 [selected_answer]
|
2018/10/03
| 739
| 3,121
|
<issue_start>username_0: I'm wondering if this should be a single tag. However, if we do keep them as separate tags, how do we disambiguate?
[natural-language](https://ai.stackexchange.com/questions/tagged/natural-language)
[natural-language-processing](https://ai.stackexchange.com/questions/tagged/natural-language-processing)<issue_comment>username_1: It looks to me like they are both used for similar questions, and based on the current Tag Info for the two tags they don't really appear to be different either. So, based on current tag usage, I'd argue that they should be combined into a single tag (which, in my opinion, should be `natural-language-processing` because that's the full term that everyone in the field uses in my experience).
I suppose that, in theory, `natural-language` could refer to something else than NLP... like, it could be for questions about language itself, rather than questions about processing (generating and/or understanding) language. I have a very difficult time imagining any such questions would actually be on-topic for AI though.
Upvotes: 2 <issue_comment>username_2: There is a key difference between the two terms. Whether this finer level of granularity is useful in the tags, I have no opinion.
Natural language is concerned with the general idea of conveying ideas via vocalization and the comprehension of the idea by a listener.
Natural language processing sounds more well defined, but it is actually poorly defined and the definitions in the literature are scattered between these two extremes:
1. Parsing text into linguistic structures.
2. Linguistic processing components in chat-bots designed to replace human experts.
What is included in NLP?
* Talking?
* Generating text?
* Generating linguistic associations?
* Parsing text?
* Hearing?
* Listening?
* Dialog?
* Topic detection?
* Cognition?
* Story invocation? — See Schank.
* Translation?
Depends on who is teaching and when. I don't even see any consistency between the same person's view of NLP over time.
If addressing the terms literally, natural language is simply the field of linguistics minus the addition of formal languages. NLP would then become the action that occurs when some system deals with natural language at either its input, its output, or both.
I saw those two tags earlier today. I don't have a recommendation as to whether to combine them or leave them alone.
Upvotes: -1 <issue_comment>username_3: Natural language *by itself* (that is, without considering computation-related aspects) has little to do with AI. In AI, we want to do NLP, which can be based on natural language, but the tag [natural-language-processing](https://ai.stackexchange.com/questions/tagged/natural-language-processing "show questions tagged 'natural-language-processing'") or [nlp](https://ai.stackexchange.com/questions/tagged/nlp "show questions tagged 'nlp'") should also include these related discussions or questions. So, the tag [natural-language](https://ai.stackexchange.com/questions/tagged/natural-language "show questions tagged 'natural-language'") should not really exist.
Upvotes: 0
|
2018/10/06
| 1,009
| 4,336
|
<issue_start>username_0: I was quite surprised to see this question migrated from AI Stack Exchange to Data Science:
* <https://datascience.stackexchange.com/questions/39279/dqn-cannot-learn-or-converge>
There are two reasons that I am surprised:
* In my opinion, Reinforcement Learning is not really a data science or statistics subject. Some of the toolkit is the same (mainly neural networks), but the resulting system is different.
* In my opinion, the AI stack exchange is where I would *expect* to see practical discussions of agents that learn how to behave rationally in environments. This encompasses RL and other approaches to creating behaviours or policies, such as NEAT.
In fact I have just encouraged a recent poster with a practical RL question on Cross Validated (why is a Q learning on Towers of Hanoi not working) to post here... Should I have? Do we want that kind of question?
I would like to open a discussion:
>
> What makes a question which is obviously about Reinforcement Learning on or off topic at AI Stack Exchange?
>
>
>
Some thoughts:
* Are questions about implementing RL algorithms with code snippets on-topic here (assuming code problems are not trivial such as Python syntax errors)?
+ The example question that was migrated to DataScience is in this category.
* Are questions about the mathematical theory behind RL on topic here, such as understanding the proof of the Policy Improvement Theorem or deriving Policy Gradients?
+ The maths of RL is easily as complex as anything discussed on CrossValidated. In fact Cross Validated already has many RL questions about these topics. Are they really statistics questions though - should they in fact be migrated *here*?
My personal opinion is that both kinds of questions should be on-topic here. In fact I am hard pushed to come up with an RL question which would not be on topic. That doesn't prevent some subset of RL questions being on-topic elsewhere too. But here is where I would expect them *all* to be on-topic. That is not to say that I would expect them all to remain open or get answers - some might be low quality or not answerable for other reasons.
But is my opinion out of step with others on the site? Have I made some incorrect assumption about the scope of this stack?<issue_comment>username_1: The migration of this question to datascience seems **really strange** to me. Like you said, RL really is pretty much **the furthest removed from data science out of all Machine Learning topics**, even if it were off-topic on AI for whatever reason, it certainly wouldn't be on-topic on Data Science.
To address specifically the question in the title, I'd say pretty much any Reinforcement Learning question is on-topic on AI.se (AI certainly seems a better fit for RL questions than either stats.se or datascience.se), **except maybe** questions that are 100% clearly about programming issues/bugs. For example, a question like "My RL algorithm is crashing, here is the stack trace, what's wrong?" Such questions might be a better fit on StackOverflow (still **not** datascience).
This particular question that got migrated **might** fit that description... but it's not certain. The question-asker is not certain if it's a bug in their code, or if there is some issue in choice of algorithm for this particular environment or something along those lines. In my opinion, whenever there is that level of uncertainty, the question is likely to require expertise in AI (specifically, in RL), not just programming expertise because it might not be just a programming issue. That makes, in my opinion, AI.se a better fit than StackOverflow.se (or any other site).
Upvotes: 2 <issue_comment>username_2: I also think every RL question should be on-topic here. Funnily enough, I've been flagging posts for migration to DataScience or CrossValidated according to the [help page](https://ai.stackexchange.com/help/on-topic) that defines what is off-topic. But it seems like not everyone really abides by those definitions! I regularly see both implementation and mathematics questions here, related to RL and otherwise. I've stopped flagging these questions because I enjoyed reading and answering them.
So. If no one wants to abide by our current definition of 'on-topic' (including me), we should change it, right?
Upvotes: 1
|
2018/10/16
| 336
| 1,546
|
<issue_start>username_0: Linear algebra fits clearly into the field of mathematics and doesn't have anything particular to do with AI, except that linear algebra might be used as part of an approach, but only to the extent that set theory or vector multiplication might.
Linear regression is a Statistics 101 curve fitting method, with simple formulae for slope, intercept, and correlation coefficient. A basic 1980 pocket calculator has a button for it.
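To make that concrete — the entire method is a handful of closed-form formulas (plain-Python sketch; the names are my own):

```python
import math

def simple_linear_regression(xs, ys):
    """Closed-form least-squares fit: slope, intercept, correlation r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / math.sqrt(sxx * syy)
    return slope, intercept, r

# y = 2x + 1 exactly, so r should be 1
print(simple_linear_regression([1, 2, 3, 4], [3, 5, 7, 9]))  # → (2.0, 1.0, 1.0)
```

No learning machinery is involved — just the sums a calculator button computes.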
Neither of these tags is particularly AI-centric.<issue_comment>username_1: They're both core topics that are important to understand well, very important basics, before people can move on to a plethora of more advanced topics in AI. So yes, they absolutely should be tags in AI.se.
Upvotes: 2 <issue_comment>username_2: It depends on how the questions are asked. Linear Regression is the foundation for many different machine learning and artificial intelligence algorithms. If someone were to ask a question on how their problem could be formatted as a regression, then I would argue that it's perfectly relevant to this SE. Technically, linear regression alone is one of the simplest forms of machine learning. Now, if you were to ask to prove the bounding conditions of certain types of optimizations under purely theoretical conditions, it may not be as relevant to this SE as Cross Validated, for example.
In short, it depends on how the question is asked. If it deals more with the application side of AI, then yes. I think it is perfectly reasonable to ask on the AI SE.
Upvotes: 1
|
2019/02/05
| 464
| 1,945
|
<issue_start>username_0: I came across this [question](https://ai.stackexchange.com/questions/10396/artificial-intelligence-and-its-unlawful-use). Clearly the question is badly formatted, but besides that it's speculative and based on overhyped information about AI.
Are my conclusions on this question wrong? If not, how can we make beginners in AI aware of the amount of false information floating around? (Basically, I am asking what the remedial measures are for such questions, keeping in mind we don't want to scare away new visitors.)<issue_comment>username_1: First, badly formatted posts should be fixed; that is what the wiki-style editing is for.
But that specific question should be closed as `primarily opinion-based` because it is soliciting arguments and debate centered around a vague premise built on a hypothetical future which does not currently exist. Please don't let this site become [Worldbuilding](https://worldbuilding.stackexchange.com/help/on-topic). It is not a good fit for this site.
But to answer your question more generally, if the premise of a question is wrong or misleading — whether by misunderstanding or pop culture hype — you should answer in a way that dispels the mistaken belief. Head off the incorrect information or assumption with a cohesive answer explaining the issue correctly.
Folks around the Internet are searching for this (mis)information wherever they can find it. It would be nice if they landed ***here*** to straighten out the issue authoritatively.
Upvotes: 4 [selected_answer]<issue_comment>username_2: There's actually a great deal of debate about this subject in general in the wider AI community (ethics re: implementation of AI).
That said, the question is poorly worded, and overly focuses on the proffered scenario, as opposed to the underlying general issue.
I've retagged (ethics, social, legal) and have provisionally closed the question, pending clarification.
Upvotes: 2
|
2019/02/15
| 764
| 2,809
|
<issue_start>username_0: I asked [**ai.stack**exchange.com/questions/10539/should-facebook-safety-check-work-if-an-account-is-stuck-at-a-name-change-checkp](https://ai.stackexchange.com/questions/10539/should-facebook-safety-check-work-if-an-account-is-stuck-at-a-name-change-checkp) after [**webapps.stack**exchange.com/questions/103217/will-facebook-safety-check-work-if-my-account-is-stuck-at-a-checkpoint-page](https://webapps.stackexchange.com/questions/103217/will-facebook-safety-check-work-if-my-account-is-stuck-at-a-checkpoint-page) and each time the reason is that question does not seem to fit, so the asker is personally verbally antagonized because of repeating the specific grammar/[SEO](https://en.wikipedia.org/wiki/Search_engine_optimization) of Mark's (@zuck's) exact dictated verbatim use of code/language?
To quote and repeat their apparently questionable (judging by the quick *solved-same-day-every-time-nobody-else-could-help-better-if-they-read-too* moderator responses of Stack's encouragement) wording again, *"Zuckerberg didn’t seem to have any specifics, but he went out of his way to tell me he thought **artificial intelligence** was going to play a big role in identifying moments of crisis on the network."* from <https://www.wired.com/2016/11/facebook-disaster-response> an AI/computer site.
I asked about the ***fact*** of if a user still gets safety notifications.
|
2019/03/29
| 1,384
| 5,285
|
<issue_start>username_0: I think this website is good for:
1. Clarifying/explaining/discussing theoretical AI concepts (including concepts described in AI research papers, books, etc.), notation and terminology
2. Discussing philosophical issues related to AI (risks, safety, AGI, super-intelligence, etc)
3. Discussing the history (e.g. AI winters) and the future of AI and how it relates to other fields
Based on all topics described in the box on the right side of the [AI Wikipedia page](https://en.wikipedia.org/wiki/Artificial_intelligence), theoretical AI concepts/goals comprise:
* Knowledge reasoning
* Machine learning
+ Reinforcement learning
+ Supervised learning
+ Unsupervised learning
+ Online learning
+ Continual, lifelong or incremental learning
+ Active learning
+ ...
* Planning
* Natural language processing
* Computer vision
* Robotics
* AGI
The approaches are
* Symbolic (GOFAI)
* Deep learning
* Bayesian networks
* Causal inference
* Evolutionary algorithms
+ genetic algorithms
* Swarm intelligence
+ Ant colony optimization algorithms
+ Artificial bee colony algorithm
+ Particle swarm optimization
* ...
Some philosophical and social issues include:
* Ethics
* Existential risk
* AI tests
* Definitions of AI
* Chinese room
* Weak vs strong AI
* Super-intelligence
* Friendly AI
* Emotional AI
* Explainable AI
* ...
Currently, the [Help Center](https://ai.stackexchange.com/help/on-topic) does not explicitly state that these topics are suited for this website, but I think it should. I think we should clarify which topics are on-topic here. In general, we can use the linked Wikipedia page to help us clarify which topics are suited for the website.
Furthermore, I would say that every implementation-related question should always be considered off-topic here, given that there's already Stack Overflow (and Data Science SE) for this. Which other topics are off-topic here? Should we also be more strict regarding primarily opinion-based questions? I think so, but given that philosophical questions are allowed here, we need to be careful when defining the borderline.
What about hardware questions related to neuromorphic chips? We actually have a [neuromorphic-engineering](https://ai.stackexchange.com/questions/tagged/neuromorphic-engineering "show questions tagged 'neuromorphic-engineering'") tag. If they are about theoretical properties and not implementation issues, can they be considered on-topic?
Furthermore, it would be useful if every new user were "forced" to read the on- and off-topic pages (before posting a new question), to prevent them from posting off-topic questions. It would also be useful to have an automatic way to guide them to the more appropriate website in those cases. Is this possible to do?
We should spend a few paragraphs to describe our community to new users and how it is different from (or similar to) other communities (in particular, Data Science SE, Cross Validated SE and Stack Overflow).
Several related questions have been asked in the past
* [What topics can I ask about here?](https://ai.meta.stackexchange.com/q/1252/2444)
* [Is it time to modify our site guidelines?](https://ai.meta.stackexchange.com/q/1356/2444)
* [How to distinguish between 'programming' and 'conceptual' questions?](https://ai.meta.stackexchange.com/q/1215/2444)
* [Technical questions are not getting closed](https://ai.meta.stackexchange.com/q/1279/2444)
* [Why aren't implementation based questions welcome on this stack?](https://ai.meta.stackexchange.com/q/1559/2444)
* [What is in scope under the "implementation of machine learning" exclusion?](https://ai.meta.stackexchange.com/q/1287/2444)
* [What kind of implementation questions should be off-topic?](https://ai.meta.stackexchange.com/q/1081/2444)
* [What should the AI.SE Site Description be?](https://ai.meta.stackexchange.com/q/1430/2444)
|
2019/05/23
| 478
| 1,958
|
<issue_start>username_0: Opening this Meta to solicit opinions. (I'll put my own in a separate answer.)
* Is referencing a Quora question materially different than referencing one's blog?
* Should this be addressed by voting, where the answer is judged on the strength of the content, as opposed to the source?<issue_comment>username_1: My feeling is this should be addressed by voting since Quora is no more commercial than Stack (limited ads), thus such links don't constitute spam.
I do see Stack & Quora in competition, although I hope Stack will ultimately prevail in terms of search rankings. (US Alexa ranking for Stack is 115 worldwide and 65 in the US, vs. 78/47, so we're not quite there yet.)
But, in some sense, both sites have the same mission, even if Stack seems to do a better job because of our voting system.
Upvotes: 2 <issue_comment>username_2: I think it's fine, right? As long as it's not just a link, but there is some explanation surrounding it. Referencing things that have been written elsewhere seems to be very much preferable to... copying without attribution?
Sometimes in my answers I'll reference papers which I'm an author on myself. I don't think that's really different in any tangible way?
Upvotes: 2 <issue_comment>username_3: I raised the flag, since it was the second time the OP posted a link to a Quora answer written by the OP. IMO referring to one's blog is ok, but referring frequently is not. Also, it is not very hard to copy-paste from the blog and attribute it to the blog at the end (neither answer involved any technical or math details), so it does not make sense not to do it.
Also since the user was new I did not want to comment wrongly on what's accepted and what's not in this stack, so I just thought moderators will do a better job.
Upvotes: 2 <issue_comment>username_4: Quora has a login popup which often prevents reading the answer. In my opinion it's definitely not OK to reference Quora.
Upvotes: 1
|
2019/05/27
| 358
| 1,366
|
<issue_start>username_0: A new tag `ant-colony` has been introduced. I see 2 problems here:
* The name should have been `ant-colony-optimization`
* The tag `swarm-intelligence` should already cover ant colony optimization techniques, so it does not make much sense for a new tag.
What are your thoughts?<issue_comment>username_1: I introduced this tag because ACO is a well-developed sub-field of swarm intelligence, so it deserves (IMHO) its own tag, like e.g. reinforcement learning deserves its own tag (compared to machine learning) on a website dedicated to AI.
I used [ant-colony](https://ai.stackexchange.com/questions/tagged/ant-colony "show questions tagged 'ant-colony'") because it is shorter and there's no ambiguity in the field of AI. Furthermore, a lot of people do not refer to these algorithms as "ACO", but e.g. as "ant colony system" or just "ant colony algorithms". I would argue that [ant-colony](https://ai.stackexchange.com/questions/tagged/ant-colony "show questions tagged 'ant-colony'") is a more general tag and expression.
Furthermore, several questions on ACO have already been asked on the website.
Upvotes: 1 <issue_comment>username_2: Wondering if we should maybe just have a general tag for "ant-intelligence" that could cover all aspects, including swarm intelligence, engineering (tunnel building) and path finding..
Upvotes: 0
|
2019/10/27
| 585
| 2,440
|
<issue_start>username_0: *I'm aware that this topic has been discussed before, and didn't have much support, but I think it's worth at least a revisit, even only if to confirm the current status.*
* Is career advice off-topic if related to academic pursuits re: professional opportunities?
Here, specifically, questions such as "What classes to take to get this job?"
If the above question is on-topic, is it worth considering allowing professional advice in general?
Questions might involve sub-fields that are hot at any given time, industry trends, interview process (what to expect), etc. Types of roles that exist in organizations, and potentially even pay-scales.
1. This community is made up of people studying and working in the AI field, in the private sector and academia. Others may have recently gone through the interview process. This constitutes a cluster with field specific knowledge, as opposed to the stacks that deal with this in general.
2. Broadening the scope could be helpful in attracting new users, who might subsequently contribute. (My own participation on Stack in general is a product of having gotten some info I needed several years ago.)
3. All questions and answers are dated so visitors can see how current the information is.
We are a general AI community, so I think this subject is potentially in scope, and could expand our utility.
|
2019/11/03
| 1,980
| 7,865
|
<issue_start>username_0: The topics of our website highly overlap with the topics of CrossValidated and Data Science, but also with the topics of [Computer Science SE](https://cs.stackexchange.com), Stack Overflow and [Philosophy SE](https://philosophy.stackexchange.com) (in fact, they even have [an AI tag](https://philosophy.stackexchange.com/questions/tagged/artificial-intelligence) with currently 145 questions, while we have barely more philosophical questions: 169).
The differences between our site, CrossValidated and Data Science seem to be the focus, the users and their background, and certain topics. I think that a new and growing website, like ours, is attractive to certain people (including me) because it may represent an opportunity to show their abilities to others and maybe rule the website, while, in websites like CrossValidated, where there are already many established users, this may be more difficult. *But does it really make sense to have all these separate websites (especially, CrossValidated, Data Science and ours), only because of these small differences?*
It may happen that users on one of these sites may not be able to (properly) answer a question on their own website, but users on other related sites may be able to answer such a question. In those cases, the asker may not receive the help that, in theory, is available, but not directly accessible.
In order to understand if our website deserves to live, I think we need to enumerate the topics and goals of our website that really differentiate (or not) us from the other websites. Maybe we should really focus on the topics that differentiate us from the other websites. What do you think?
### Topics
Here's a preliminary list of such topics (I am using the tags below only to emphasize that these are on-topic here, but I am referring to the topics)
* [swarm-intelligence](https://ai.stackexchange.com/questions/tagged/swarm-intelligence "show questions tagged 'swarm-intelligence'")
+ [ant-colony](https://ai.stackexchange.com/questions/tagged/ant-colony "show questions tagged 'ant-colony'")
* [symbolic-ai](https://ai.stackexchange.com/questions/tagged/symbolic-ai "show questions tagged 'symbolic-ai'") (which is a synonym for [gofai](https://ai.stackexchange.com/questions/tagged/gofai "show questions tagged 'gofai'"))
+ [expert-system](https://ai.stackexchange.com/questions/tagged/expert-system "show questions tagged 'expert-system'") (maybe also on topic on [Computer Science SE](https://cs.stackexchange.com), given they have a tag for this)
* [social](https://ai.stackexchange.com/questions/tagged/social "show questions tagged 'social'")
* [agi](https://ai.stackexchange.com/questions/tagged/agi "show questions tagged 'agi'"), [strong-ai](https://ai.stackexchange.com/questions/tagged/strong-ai "show questions tagged 'strong-ai'"), [weak-ai](https://ai.stackexchange.com/questions/tagged/weak-ai "show questions tagged 'weak-ai'")
* [history](https://ai.stackexchange.com/questions/tagged/history "show questions tagged 'history'") (history of AI)
+ [ai-winter](https://ai.stackexchange.com/questions/tagged/ai-winter "show questions tagged 'ai-winter'")
For completeness, maybe we should also list the topics that are on-topic both here and on the other sites.
* [machine-learning](https://ai.stackexchange.com/questions/tagged/machine-learning "show questions tagged 'machine-learning'") (on topic at CrossValidated and Data Science SE)
+ [deep-learning](https://ai.stackexchange.com/questions/tagged/deep-learning "show questions tagged 'deep-learning'") and [neural-networks](https://ai.stackexchange.com/questions/tagged/neural-networks "show questions tagged 'neural-networks'")
+ [reinforcement-learning](https://ai.stackexchange.com/questions/tagged/reinforcement-learning "show questions tagged 'reinforcement-learning'")
+ [evolutionary-algorithms](https://ai.stackexchange.com/questions/tagged/evolutionary-algorithms "show questions tagged 'evolutionary-algorithms'")
* [philosophy](https://ai.stackexchange.com/questions/tagged/philosophy "show questions tagged 'philosophy'") (on topic at [Philosophy SE](https://philosophy.stackexchange.com))
* [search](https://ai.stackexchange.com/questions/tagged/search "show questions tagged 'search'") (on topic at [Computer Science SE](https://cs.stackexchange.com) and maybe Data Science SE)
Feel free to add more topics and goals that distinguish (or not) us from especially CrossValidated and Data Science.<issue_comment>username_1: AI has always been an interdisciplinary field. It therefore should not surprise us that AI.SE's content overlaps with that of other established stacks. I think this is essentially okay.
Perhaps as an analogy: SoftwareEngineering.SE allows programming questions, but not of the same flavor as the StackOverflow main site. If you want to know *how to do X in language Y*, you visit StackOverflow. If you want to know *whether to do X using language Y*, you are better off asking on SoftwareEngineering.SE.
If you want to know *how* to train a deep neural network in Python, you should visit DataScience.SE. If you want to know *whether* to train a deep neural network in Python (or, use any of the other various approaches in AI), you should visit AI.SE.
I think this means that a tag-based approach is the wrong one. We are likely to have questions that are about, say, statistical learning theory. This is part of AI. It is *maybe* part of Data Science, but I'd say it's a stretch. It is *maybe* part of statistics, but certainly not conventional statistics. It is definitely part of AI, and has been a core part for decades. Nonetheless, it encapsulates topics like support vector machines that *are* widely used in Data Science. We, therefore, oughtn't to outlaw the SVM tag. I think the same kind of argument can be used for most or all duplicate tags.
I'm especially concerned to see the `machine-learning` tag highlighted in the duplicates. Modern AI without machine learning is... not much.
I think if we focused only on the tags that are not present on other websites, we will not be able to claim to be about AI, and the site would (and perhaps should) then cease to exist. I think we'll do much better if we instead focus on claiming the *why* space.
Upvotes: 2 <issue_comment>username_2: >
> Does it really make sense to have all these separate websites (especially, CrossValidated, Data Science and ours), only because of these small differences
>
>
>
No, it doesn't make any sense, because whatever people are saying on meta, in practice, if you look at the questions posted on AI.SE, over 90% of them are on-topic on CrossValidated and Data Science. This creates plenty of cross-network question duplicates, which personally kills my motivation to participate.
Upvotes: 2 <issue_comment>username_3: * We can address futurism (one of the leading drivers of misinformation about AI!) and serve an important function of myth-busting.
* We deal with social impacts in general, which other related stacks don't address.
* We can take psychology/cognitive/neuroscience questions related to AI, which may be unwelcome on those stacks.
* We can treat AI milestones in general, not just those related to statistical AI.
re: Philosophy, although we have few questions formally containing that tag, [a search of the most voted SE:AI questions](https://ai.stackexchange.com/questions?tab=Votes) reveals the subject to be popular and well-treated on this Stack. (Compare to the [relative lack of activity for AI questions on SE: Philosophy, especially in recent years](https://philosophy.stackexchange.com/questions/tagged/artificial-intelligence?tab=Newest).) SE:Philosophy also lacks a "neoluddism" tag, which is the more relevant philosophy tag, in that it relates to material effects of AI implementation, including bias.
Upvotes: 2
|
2019/11/07
| 555
| 2,258
|
<issue_start>username_0: This community last had moderators appointed in 2017, so it's been a while... In addition to that, you may have noticed that [one of the current mods — <NAME> — is stepping down from their moderator position](https://ai.meta.stackexchange.com/q/1601/45).
Since moderators were last appointed in this community, we've started and "graduated" an experiment: pro-tem moderators [are now elected](https://meta.stackexchange.com/q/332180/208518), just like "regular" moderators. As such, to find a replacement for Ben, we're looking at scheduling an election to start somewhere in January 2020. To avoid finding ourselves in a situation where an election would fail due to an insufficient number of candidates, though, I'm posting this to try to assess the community members' willingness to step up and nominate themselves, when the actual election's nomination period starts.
Please leave an answer if you'd be willing to run for a moderator position, should we decide to run an election. Like I mentioned, we're looking at scheduling the nomination period to start some time in January '20.
**NOTE:** This is not an official election nomination thread, just a "pulse check" to get a notion of how many people here would be willing to step up, so you don't have to put up your whole election nomination.<issue_comment>username_1: I'm interested in running for a moderator position.
Upvotes: 3 <issue_comment>username_2: Thank you for writing to us about this. This isn’t our final response, but in the interest of being transparent and keeping lines of communication open, ***I myself have been yearning for this mod position for a long time.***
Therefore, with all my interest and inner drive, I'm running for this! I shouldn't even hesitate.
Thanks once again for the heads-up. Keep us posted on any updates.
Upvotes: 0 <issue_comment>username_3: I am happy to run, especially if there is a shortage of candidates, although I think I am not the best candidate for the job.
Upvotes: 3 <issue_comment>username_4: I would be interested in running. Though, I must say that the first two members that came to mind as the top candidates have already stated that they are interested.
Upvotes: 2
|
2019/11/16
| 3,351
| 13,041
|
<issue_start>username_0: Similarly to [What should the AI.SE Site Description be?](https://ai.meta.stackexchange.com/q/1430/2444) and after the discussions [On-topic and off-topic pages need to be clarified](https://ai.meta.stackexchange.com/q/1506/2444) and [Who decides and writes the on-topic and off-topic pages?](https://ai.meta.stackexchange.com/q/1611/2444), I think it is time to vote for a clearer and updated version of the on-topic page, which users (but especially moderators) should **strictly** adhere to.
*You should vote for the answer that proposes the best alternative to the current on-topic page. You can also propose a new on-topic page, if you are not happy with the current proposals.*
After a reasonable consensus is reached, the moderators should update the site descriptions to match the top voted answer.<issue_comment>username_1: What topics can I ask about here?
---------------------------------
If you have a question about **theoretical, philosophical, social, historical**, and certain **developmental** and **academic** aspects of artificial intelligence, then you are *probably* in the right place to ask your question!
Below you can find a *non-exhaustive* list of specific topics that are considered on-topic here. Next to each topic, you have links to other stacks where the corresponding topics may also be on-topic.
### Specific topics
You can ask a question about the **theoretical** aspects of the following sub-fields of artificial intelligence.
* Artificial general intelligence
* Affective computing
* Swarm intelligence
* Evolutionary algorithms ([1](https://stats.stackexchange.com/help/on-topic), [4](https://stackoverflow.com/help/on-topic), [6](https://cs.stackexchange.com/help/on-topic))
* Machine learning ([1](https://stats.stackexchange.com/help/on-topic), [2](https://datascience.stackexchange.com/help/on-topic), [4](https://stackoverflow.com/help/on-topic), [6](https://cs.stackexchange.com/help/on-topic))
* Computational learning theory ([1](https://stats.stackexchange.com/help/on-topic), [6](https://cs.stackexchange.com/help/on-topic), [7](https://cstheory.stackexchange.com/help/on-topic))
* Natural language processing and understanding ([6](https://cs.stackexchange.com/help/on-topic))
* Computer vision ([1](https://stats.stackexchange.com/help/on-topic), [2](https://datascience.stackexchange.com/help/on-topic), [4](https://stackoverflow.com/help/on-topic), [6](https://cs.stackexchange.com/help/on-topic), [10](https://dsp.stackexchange.com/help/on-topic))
* Knowledge representation and reasoning ([6](https://cs.stackexchange.com/help/on-topic))
* Robotics ([5](https://robotics.stackexchange.com/help/on-topic))
* Planning ([6](https://cs.stackexchange.com/help/on-topic))
The following **philosophical** (or theoretical) aspects are on-topic.
* Intelligence definitions and testing
* Superintelligence
* Emotional intelligence
* Artificial consciousness
The following **social** aspects are on-topic.
* Ethics ([3](https://philosophy.stackexchange.com/help/on-topic))
* Explainable artificial intelligence
* Applications
The following **historical** aspects are on-topic.
* Timeline (e.g. AI winters)
* Progress
You can also ask questions about
* Terminology and notation
* Proofs ([8](https://math.stackexchange.com/help/on-topic))
* Clarifications of certain excerpts from papers, books, etc.
* Reference requests (e.g. "Which paper introduced vanilla RNNs?")
### Notes
* Before posting, please **look around to see if your question has been asked before**. If you don't, your question could be closed as a duplicate.
* You should **put some effort into writing your question**. If your question is unclear, it could be flagged as unclear, your question could be closed, and you will not receive help. Furthermore, we expect users to do a little bit of research before asking a question.
* **Ask specific questions**! If your question has potentially many answers, your question may be closed as too broad.
* [**You should try asking one question or address a single problem per post**](https://meta.stackexchange.com/a/39224/287113), unless the questions are really very related to each other. If you ask multiple questions per post, your post may be closed as too broad.
* Ideally, we are looking for **questions that can be answered objectively**. More precisely, do not ask for advice (such as career path recommendation or a tool, which are, in general, **off-topic** here anyway) but for facts (including references) and arguments. If you have a philosophical question, you should demand a logical, rational and reasonable answer that argues the philosophical perspective (and not just an opinion).
* **Implementation questions in the context of understanding the theoretical topics are on-topic**. For example, if a theoretical topic is described by a certain mathematical formula and you want to understand how a certain implementation is related to the formula, then your question is on-topic. As a rule of thumb, if you can describe your problem without the source code and if you think that a solution to your problem can be given without the source code, then your question is on-topic. The source code can be provided to further clarify the issue, but you should provide a [Minimal, Reproducible Example](https://stackoverflow.com/help/minimal-reproducible-example).
* **General programming questions are off-topic**. For example, if you have a question like "Why am I getting this exception?", "How do I merge two Pandas' data frames?" or "How can I use this Keras API?", then your question is off-topic (and you should probably ask it on [Stack Overflow](https://stackoverflow.com/help/on-topic)).
* It's also OK to ask and answer your own question.
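As a hypothetical illustration of the formula-to-implementation correspondence described in the notes above (the logistic sigmoid and all names here are my own example, not drawn from any particular question):

```python
import math

# Hypothetical illustration: the logistic sigmoid,
# sigma(x) = 1 / (1 + e^(-x)), transcribed directly into code.
# An on-topic question could ask, for instance, how a numerically
# stable implementation in some library relates to this formula.

def sigmoid(x):
    """Direct transcription of sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))  # → 0.5
```

A question that only pastes a stack trace around such code would still be off-topic; a question about how the code realizes the formula would not.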
### Overlapping Stacks
If your question is not specifically on-topic for Artificial Intelligence Stack Exchange, it may be on-topic for another Stack Exchange site, such as
1. [Cross Validated](https://stats.stackexchange.com/help/on-topic)
2. [Data Science](https://datascience.stackexchange.com/help/on-topic)
3. [Philosophy](https://philosophy.stackexchange.com/help/on-topic)
4. [Stack Overflow](https://stackoverflow.com/help/on-topic)
5. [Robotics](https://robotics.stackexchange.com/help/on-topic)
6. [Computer Science](https://cs.stackexchange.com/help/on-topic)
7. [Theoretical Computer Science](https://cstheory.stackexchange.com/help/on-topic)
8. [Mathematics](https://math.stackexchange.com/help/on-topic)
9. [Psychology & Neuroscience](https://psychology.stackexchange.com/)
10. [Signal Processing](https://dsp.stackexchange.com/help/on-topic)
Certain questions are probably on-topic on multiple of these websites. For example, machine learning questions are also on-topic at [Cross Validated](https://stats.stackexchange.com/help/on-topic), which is more statistics-oriented. There are probably other overlapping sites.
If no site currently exists that will accept your question, you may commit to or propose a new site at [Area 51](https://area51.stackexchange.com), the place where new Stack Exchange communities are democratically created.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I adjusted [@username_1's answer](https://ai.meta.stackexchange.com/a/1616/2444) to remove the parts I thought were too restrictive. AI is a broad field, and the whitelist of "on-topic" areas omits a huge number of topics which are certainly within AI (consider, for contrast, [the topics](https://aaai.org/Conferences/AAAI-19/aaai19keywords/) that are present at AAAI this year alone, all of which are active areas of research). I think that the entry under the "What topics can I ask about here?" is specific enough. If we want to use a list of valid topics, we should formulate it by starting with actual active areas of research for the field, perhaps by amalgamating the keywords and topics that are present at AAAI, NIPS, UAI, IJCAI, AAMAS, CEC, and other major conferences. I suspect that's a lot more work than it's worth however.
I also adjusted the wording of the programming portion to better reflect the idea that programming questions are fundamentally *on-topic* here, as long as they are about AI algorithms or implementations, and not applications. I think that without this, the stack is going to lack a connection to academic AI, and will descend into a sort of futurism/singularity board. We want to encourage more programming related content, not less, but only of the kind that actually relates to AI.
What topics can I ask about here?
---------------------------------
If you have a question about **theoretical, philosophical, historical, social** and **algorithmic** or **academic** aspects of AI, then you are *probably* in the right place to ask your question!
### Notes
* Before posting, please **look around to see if your question has been asked before**. If you don't, your question could be closed as a duplicate.
* You should **put some effort into writing your question**. If your question is unclear, it could be flagged as unclear, your question could be closed, and you will not receive help. Furthermore, we expect users to do a little bit of research before asking a question.
* **Ask specific questions**! If your question has potentially many answers, your question may be closed as too broad.
* [**You should try asking one question per post**](https://meta.stackexchange.com/a/39224/287113), unless the questions are really very related to each other. If you ask multiple questions per post, your post may be closed as too broad.
* Ideally, we are looking for **questions that can be answered objectively**. More precisely, do not ask for advice (such as career path recommendation or a preferred tool, which are, in general, **off-topic** here anyway) but for facts (including references) and arguments. If you have a philosophical question, you should demand a logical, rational and reasonable answer that argues the philosophical perspective (and not just an opinion).
* It's also OK to ask and answer your own question.
* Programming questions about the implementation of AI algorithms, or the source code of implementations of those algorithms, are on-topic. **Programming questions about applying AI tools to specific problems are off-topic**, and probably belong on DataScience.SE, or the main StackOverflow site. If you're looking for a clarification of the implementation of a certain AI concept, then your question is on-topic. For example, if a theoretical topic is described by a certain mathematical formula and you want to understand the implementation of this formula, then your question is on-topic. However, if you have a question like "Why am I getting this exception?", "How do I merge two Pandas' data frames?", or "How can I use Tensorflow to train a neural network to recognize cats?" then your question is off-topic (and you should probably ask it on [Stack Overflow](https://stackoverflow.com/help/on-topic)).
Similar websites
----------------
If your question is not on-topic for Artificial Intelligence Stack Exchange, it may be on-topic for another Stack Exchange site, such as
* [Data Science](https://datascience.stackexchange.com/help/on-topic)
* [Cross Validated](https://stats.stackexchange.com/help/on-topic)
* [Stack Overflow](https://stackoverflow.com/help/on-topic)
* [Robotics](https://robotics.stackexchange.com/help/on-topic)
* [Computer Science](https://cs.stackexchange.com/help/on-topic)
* [Philosophy](https://philosophy.stackexchange.com/help/on-topic)
Certain questions are probably on-topic on multiple of these websites. For example, machine learning questions are also on-topic at [Cross Validated](https://stats.stackexchange.com/help/on-topic), which is more statistics-oriented.
If no site currently exists that will accept your question, you may commit to or propose a new site at [Area 51](https://area51.stackexchange.com), the place where new Stack Exchange communities are democratically created.
Upvotes: 0 <issue_comment>username_3: I like all of the suggestions in general, and think it's now just a matter of hammering out details, and dealing with the competing concerns of brevity vs. extrapolation.
**I think we should lift some of the guidance from Data Science re: Overlap**
>
> Even though the boundaries are not always perfectly clear and we often accept questions that are also appropriate on other sites, here are a few guiding thoughts:
>
>
> If you think a question is equally appropriate on multiple sites, ask on the site with the most users (usually Stack Overflow or Data Science). That way you have the best chance to get good and quick answers and site contents will stay more coherent. If it is not accepted there, it can be migrated to the correct site. Don't post your questions on more than one site.
>
>
> Other relevant sites include:
>
>
> * Open Data (Dataset requests)
> * Computational Science (Software packages and algorithms in applied mathematics)
> * etc.
>
>
>
Upvotes: 1
|
2020/03/03
| 547
| 1,884
|
<issue_start>username_0: Artificial Intelligence's [first moderator election](https://ai.stackexchange.com/election/1) has come to a close, the votes have been tallied and the two new moderators are:
[nbro](https://ai.stackexchange.com/users/2444/nbro) [John Doucette](https://ai.stackexchange.com/users/16909/john-doucette)
They'll be joining [the existing crew](https://ai.stackexchange.com/users?tab=moderators) shortly—please thank them for volunteering, and share your assistance and advice with them as they learn the ropes!
For details on how the voting played out, you can download the election results [here](https://ai.stackexchange.com/election/1), or [view a summary report online](https://www.opavote.com/results/4701123745153024/0).
---
More hands ended up being needed, so the Community Team reached out to the top runner up in this election — let's welcome them to the team too:
[](https://ai.stackexchange.com/users/1641/nbro)<issue_comment>username_1: I am really flattered by this opportunity and vote of confidence! I will try to do my best, in collaboration with the other moderators and community members! Feel free to ping me in [our main chat room](https://chat.stackexchange.com/rooms/43371/the-singularity) and I will try to answer as soon as possible. Hopefully, our community will continue to grow in quantity but especially in quality!
Upvotes: 3 <issue_comment>username_1: Congratulations to our new moderator, Dennis! He has a very good overall knowledge of the AI field, he's patient, and he's been around for a long time, which shows that he cares about this community. I think he will be a good moderator (if he remains at least as active as he has been)!
Upvotes: 1
|
2020/04/16
| 232
| 786
|
<issue_start>username_0: After [Empty 'SPONSORED BY' label under top bar - AWS logo not shown](https://ai.meta.stackexchange.com/q/1636/2444), the "sponsored by" has completely disappeared from the site. Are we still sponsored by AWS or anyone?<issue_comment>username_1: The "sponsored by" is gone, for now, because the site sponsorship is currently paused.
While we don't have a set date yet for the sponsorship to return, current conversations with AWS point to relaunching in early Q3, though the date isn't locked in yet.
Upvotes: 2 <issue_comment>username_2: Seems AWS came back up in the "Sponsored by" section recently, this is what it looks like right now:
[Screenshot of the "Sponsored by" section](https://i.stack.imgur.com/zv8hA.png)
Upvotes: 2
|
2020/05/15
| 313
| 1,159
|
<issue_start>username_0: I came across this [question](https://ai.stackexchange.com/questions/123/does-the-chinese-room-argument-hold-against-ai) about the Chinese Room argument. I was reading a book in which the author tries a different way to disprove the CR argument. My point is that the question:
>
> Does the Chinese room argument hold? Can we argue that artificial intelligence is merely clever algorithmics?
>
>
>
can generate opinionated answers based on the field one is expert in. A mathematician might treat the problem in a different way compared to a neurobio/psychology person. Should such questions remain open?
|
2020/07/08
| 325
| 1,384
|
<issue_start>username_0: Can we ask programming questions about PyTorch, TensorFlow, or other deep learning frameworks?<issue_comment>username_1: No.
General programming issues are off-topic here. For example, if you have an exception/bug/error in your source code or you don't know how to use a certain library/API, then that's off-topic. If you have this type of question, the most appropriate site is probably Stack Overflow (or Data Science SE).
However, if you want to understand how a certain concept/algorithm/model is implemented, then you can ask questions about that, because that's more of a conceptual question. [Here is an example of such a question](https://ai.stackexchange.com/q/20803/2444). (But please try to ask a specific and clear question that explains what you don't understand, to make it easier for answerers to help you.)
[Our on-topic page](https://ai.stackexchange.com/help/on-topic) actually states these things explicitly, so I suggest that you read or at least skim through our on-topic page again.
Upvotes: 2 <issue_comment>username_2: I'm personally in favor of this, but the overall consensus is that we should focus on theory, as opposed to implementation.
(We haven't historically had a good response to programming or implementation questions, so the argument for leaving those to Stack Overflow and other stacks is strong.)
Upvotes: 0
|
2021/02/02
| 615
| 2,629
|
<issue_start>username_0: Game Theory is a sub-branch of economics. In fact, [Economics Stack Exchange](https://economics.stackexchange.com/questions/tagged/game-theory) has many questions tagged with [game-theory](https://ai.stackexchange.com/questions/tagged/game-theory "show questions tagged 'game-theory'"). We also have some questions tagged with this tag. However, to me, given that I'm not an expert in this topic (I only know about minimax and similar adversarial search algorithms applied to solve games like tic-tac-toe in the context of AI), it's not clear to what extent questions about game theory should be on-topic on Artificial Intelligence SE.
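As a concrete illustration of the minimax / adversarial-search idea mentioned above (a toy sketch; the board encoding and all names are my own, not taken from any linked question):

```python
# Toy minimax for tic-tac-toe. The board is a flat list of 9 cells,
# each 'X', 'O', or None; this encoding is an illustrative choice.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Game-theoretic value with `player` to move:
    +1 if X can force a win, -1 if O can, 0 if optimal play draws."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # full board, no winner: draw
    values = []
    for m in moves:
        board[m] = player
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = None  # undo the move
    return max(values) if player == 'X' else min(values)

# Tic-tac-toe is a draw under optimal play from the empty board.
print(minimax([None] * 9, 'X'))  # → 0
```

This is the game-theoretic notion of a solved zero-sum game that questions about adversarial search typically build on.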
So, I was wondering whether we should explicitly mention "Game Theory" under our topics on [our on-topic page](https://ai.stackexchange.com/help/on-topic). If not, do you think that something related to game theory that is also related to AI (such as "adversarial search") should be *explicitly* mentioned in our on-topic page? Note that our on-topic page already mentions "search".<issue_comment>username_1: I'd say that questions related to `game-theory` can very often end up being on topic on AI.SE, but it probably doesn't need to be explicitly listed as being on-topic in its entirety. Game theory does show up in various areas of AI (not just like minimax for search in games, but also for like any other kinds of multi-agent interactions, and probably some other areas I don't know enough about), so anything related to that should be on-topic in my opinion. I wouldn't explicitly list game theory as a whole as a topic, because if someone really has a complex, pure game theory question outside of any other AI context, they'd probably be better served on an economics or math website.
Upvotes: 4 [selected_answer]<issue_comment>username_2: I personally think all of the game theories can comment at a much more fundamental level than search, because I see it as the root of all rational decision making, which is the basis for intelligence (utility in an action space.)
I don't know that we need an explicit on-topic reference, but, in view of genetic algorithms/evolutionary game theory, I think we might want to consider it...
(Because I'm doing a database program, I'm currently thinking about game theoretic approaches to data storage, structure, and management. I don't know if it will be fruitful, but it will be interesting!)
I'd want to specify that we're interested in game theory in regard to things like search, genetic algorithms, and rational agents, not specific real world economic questions like those explored by think tanks.
Upvotes: 0
|
2021/05/08
| 861
| 3,537
|
<issue_start>username_0: I am dividing the answers for the questions asked by me into two types.
**One type is** the answers that can be validated by me easily. It can be either due to my knowledge or exposure or the authentic quotes provided by them from textbooks or research papers. I can accept the answer and I can conclude the answer is correct.
**Another type is** the answers that cannot be validated by me. In such cases, I generally treat it as correct based on two factors: up-votes gained by the answer, reputation of the user.
But some questions are answered by new users with low reputation. In such cases, how should I proceed? Is it okay to go by up-votes, or is there a need to enforce the restriction of quoting from authentic material or providing references for the answers?<issue_comment>username_1: >
> Is it okay to treat the number of upvotes as correctness?
>
>
>
Sometimes, there isn't a *single correct* answer to a question, so [upvotes may simply indicate the number of people that **liked** or **agreed with** with the contents of the answer](https://ai.stackexchange.com/help/privileges/vote-up).
We also have the case of a user with a lot of reputation (I cannot directly say who, but some people will know who I am talking about), but their answers are unconventional (e.g. provocative, long, contain unnecessary details, etc.) and sometimes are wrong. I'm not fully sure why this user gained so much reputation, but I believe that many inexperienced users upvoted their answers (and this user was also involved in some *voting irregularities* with multiple [sock puppet accounts](https://en.wikipedia.org/wiki/Sock_puppet_account)).
So, upvotes do not necessarily mean that the answer is *generally correct* because
1. there may be not a single correct answer to the question
2. upvotes may have been given by inexperienced users (who blindly upvoted the answer by thinking it's useful, but without knowing whether the information is really correct or not)
So, when reading an answer, you should take into account the upvotes, given that they represent the **consensus**, but maybe you should also **consult another source of information** that confirms the contents of the answer.
[I've been encouraging people not only to upvote but also to downvote content that is not good enough](https://ai.meta.stackexchange.com/q/1660/2444). The higher the number of upvotes, the higher the consensus, but, especially in our community, the number of upvotes can sometimes be misleading, given that we do not have a big number of experts, so many votes are cast by inexperienced users. You can see the voters [here](https://ai.stackexchange.com/users?tab=Voters&filter=all). You should also not blindly trust a user only because he has a high reputation. Of course, users with high reputation are more likely to be "correct", but, especially when you're not familiar with the topic, as I said, you should consult another source of information, ask for more details, ask for references, etc.
Upvotes: 3 <issue_comment>username_2: I see the main purpose of Stack as vetting information. Q&A, which is the main purpose, is subordinate to the purpose of vetting information, in that Q&A has little value when information is not sufficiently challenged.
(This likely explains why Stack is so robust, where reddit and Quora continue to be problematic.)
Ultimately, users need to read and digest the answers, and vet the citations where relevant, to determine the reliability of a given answer.
Upvotes: 2
|
2021/06/03
| 1,650
| 6,364
|
<issue_start>username_0: One can see that there are two closed proposals of AI on area 51: [10 years](https://area51.stackexchange.com/proposals/6607/artificial-intelligence) ago, [7 years](https://area51.stackexchange.com/proposals/57719/artificial-intelligence).
I joined our main site a year ago. I have asked only [25 questions](https://ai.stackexchange.com/users/18758/hanugm?tab=questions&sort=newest) and answered none, because the answers by expert or senior users are very well drafted, and it may take some time for me to study the topics in enough detail to answer.
I have experienced consistent support from [nbro♦](https://ai.stackexchange.com/users/2444/nbro), and our main site has experts from several domains. For example, I believe that [Neil Slater](https://ai.stackexchange.com/users/1847/neil-slater) is a domain expert in reinforcement learning (since he has answered RL questions by me and several others).
I am getting great answers from several other users as well. But I have a small concern, and feel slight dissatisfaction, regarding the moderation and **activity** by experts on our main site.
Very few senior/expert members are providing answers, upvoting, editing, etc. This may discourage new users from joining and contributing as well.
My question: **Why do many (domain) expert users remain silent or calm?**
I feel that our main site can achieve really great stature if such expert users are not silent.
I am guessing (any or some of) the following reasons:
1. They are contributing to other related sites like Cross validation, Data sciences, Stack overflow, Computer science.;
2. They are not impressed by the quality of questions asked by the users;
3. They are busy with their professional or personal activities;
Are my reasons true? If not, what may be the reason for such inactivity? Or am I going wrong anywhere?<issue_comment>username_1: You give three good reasons why many experienced users here are not very active or as active as you and I would like. Unfortunately, this is a problem that exists for a long time. There aren't and there haven't been many active experts on this site (from my perspective, without looking into the details/statistics, only 2-3 users regularly answer questions and almost nobody cares about editing posts to clarify them, although I've observed a small improvement in this area in the last weeks).
In my case, I am busier now than in the past two years because of my professional activity, so my activity on the site has gone down a little bit, although I try to visit the site every day and even several times a day, but, unfortunately, I am not able to edit as many posts as I used to in the past and to provide answers regularly, which sometimes require a little bit of time and effort. I've actually stopped contributing to other Stack Exchange sites in order to focus on this site, which, as you also noticed, requires more experts, in order to make new users more engaged.
I think that we haven't yet attracted many experts because experts in an AI topic are typically very busy (solving their problems in research, academia or industry), so typically you will not find many people that are doing active serious research on an AI topic here.
To attract more experts, we could advertise our site and talk about this site to people that are interested in AI. However, to attract more experts, I think it's very important to keep the quality of the site high. So, for instance, if you see a bad question/answer, you should downvote it. If you see a post that is not written clearly, either downvote it or, if you have time, edit it to improve its clarity and structure. I've been trying to do this for a long time. I've noticed a little bit of progress in terms of people providing answers, but not as much as I would like: a new user comes for a few weeks or months, another one goes for many months or forever. So, if you really believe in this site, as I do, stay around and try to [help the community in whatever way you can](https://ai.meta.stackexchange.com/q/1686/2444) :)
Upvotes: 3 <issue_comment>username_2: >
> My question: Why do many (domain) expert users remain silent or calm?
>
>
>
1. SE doesn't value our time, given that [AI, CV and DS are ~90% duplicates of one another](https://ai.meta.stackexchange.com/q/4/4).
2. SE doesn't value our content: [roomba](https://meta.stackexchange.com/q/355097/178179), [deletions](https://meta.stackexchange.com/q/269392/178179), [unexplained closures](https://meta.stackexchange.com/q/359132/178179), [unexplained downvotes](https://meta.stackexchange.com/q/355329/178179), etc.
My question: Why contribute in that context?
Upvotes: -1 <issue_comment>username_3: I think I can only really speak for myself, but for me it's very much simply just this one:
>
> 3. They are busy with their professional or personal activities;
>
>
>
I used to be much more active a while ago than I am now, and that decline in activity is very much solely because my energy goes towards other stuff. For me, I think it's also simply something that comes in... waves? Sometimes there's a period of time where I'm in the mood for spending a significant amount of my spare time on stackexchange sites, and sometimes I'm not. That's not due to anything about the site itself though.
For me, it's certainly not about preferring other stackexchange sites. In fact, usually when I'm active, I tend to be active on multiple of those sites at the same time. And when I'm not, I'm inactive in all of them at the same time.
Upvotes: 2 <issue_comment>username_4: I find the types of questions on the AI SE are often hard to answer. A lot of questions are about the latest ML/DL fad, where someone tries something out and gets stuck. And I don't have the time to wade through pages of error messages to diagnose something. As I do that in my job already, I also don't feel inclined to do that on here.
The questions I try to answer are mostly about 'old school' AI/NLP stuff, which has somewhat gone out of fashion in the public eye, but is still widely used in actual applications (because it works). But there are fewer and fewer of them.
At least it's not as bad as on some of the language SEs, where grammar fanatics downvote anything that doesn't comply with their favourite grammar rules they learned in school decades ago...
Upvotes: 2
|
2021/06/11
| 855
| 3,584
|
<issue_start>username_0: I got a doubt while reading a textbook yesterday and I [asked a question](https://ai.stackexchange.com/questions/28166/what-is-the-exact-difference-between-distributional-semantics-and-distributed-se) on the same day.
Today I got some insights regarding it after reading whole unit. But, I am not fully confident whether I am correct.
Anyway, I tried to provide [an answer](https://ai.stackexchange.com/a/28196/18758) based on my own insights, and kept the following at the top of the answer: *I am writing the answer according to my current understanding*
Should I keep that as a banner, or is it okay to provide my own interpretation as an answer and then react accordingly based on comments and down-votes to the answer?<issue_comment>username_1: I'm not sure if there are any... "official" rules for something like this on the site. Probably not. Personally, I think it's always useful to explicitly mention when you're not sure or confident about something though. The more specific you can make this, the better. If I'm sure about a large part of my answer, but not about a small detail somewhere, I'd explicitly mention it there. If you can also concretely describe **why** you're not sure about something, or which assumptions you make (that may be wrong) that lead you to whatever conclusion you're not 100% sure about, even better!
Upvotes: 3 <issue_comment>username_2: In addition to what was written in the other answer, which I agree with (so, generally, if you think you're unsure about something you should inform the reader, as misinformation can hurt the readers), I think it's important to note that, ideally, people that are also familiar with the topic or know whether your answer is correct or not are expected to upvote or downvote your answer (depending on whether it's correct or incorrect) or leave a comment.
So, even if you didn't leave that disclaimer, in my view, incorrect answers should be downvoted, independently of whether the OP will fix the mistakes later or not (note that I didn't even read your answer and I don't really know if it's correct or not, but this is a general suggestion). Downvoting should not be used to personally attack someone, but it's our tool to determine what is good/correct/useless or bad/incorrect/useful content, but, unfortunately, not all people understand this.
Upvotes: 2 <issue_comment>username_3: I think it depends also partly on the question. If it's a formal question, or a hard science question, with a precise, provable answer, a disclaimer would be good, along with acceptance of downvotes if others deem it incorrect. (It is OK to remove the answer, too.)
For fuzzy questions, which I often answer a lot of on other stacks, I'll directly say "this is my guess" before I make a supportable, well reasoned argument, because there is no source that has answered it, or there may even be differences of opinion.
("What is the difference between a sword and a knife" on martial arts, as an example. I've never been able to find an answer, but now that I've studied them sufficiently, significantly more than most people, I'm confident I can answer it, and the correctness of that answer will be validated by the argument.)
The humanities stacks, such as Literature, don't often have objective answers, and are more about providing analysis which may benefit students and scholars. Sometimes an OP will accept the one they most agree with, but, in those cases, all well reasoned answers are valuable.
We have some fuzziness in the case of philosophy and social aspects, but those questions are more rare here.
Upvotes: 1
|
2021/06/22
| 840
| 3,415
|
<issue_start>username_0: Artificial intelligence is developing at a rapid pace and there are several experts around the world. Coming to academia, there is a larger number of professors available compared to the last decade. Artificial intelligence has also become a compulsory course in developing countries.
Although I have asked some questions on the [Data Sciences](https://datascience.stackexchange.com/users/47826/hanugm?tab=questions) and [Cross Validation](https://stats.stackexchange.com/users/42883/hanugm?tab=questions) Stack Exchange sites, I *personally* feel that our main site is in no way lesser, and I would say it has a better chance to contribute than those other sites. I feel that our site is mostly visited by beginners or intermediates like me.
Beginners generally tend to ask more and more questions.
So, there is a great need to increase the critical mass of experts. Otherwise, in the long run, beginners may feel either hopeless or neglected, which may in turn slow our site's progress.
To prevent this from happening, along with the efforts to make the experts on our main site active, we need to attract new expert users who are either unaware of our site, contributing on other sites, etc.
What kind of activities can I (or anyone) do in order to attract new experts to our main site?<issue_comment>username_1: I agree with your points. This is a question that I've been asking myself for a long time, but I don't have a definitive answer/solution. Some potential solutions are
1. Advertise our website (but not sure how and where); I've tried to do this sporadically and not very seriously (e.g. by pasting links on other chats, but this actually led to some hot debates between me and others, so I've stopped doing this)
2. Talk about this site to your friends/colleagues/classmates (when you have the occasion)
3. Mention the name of this site in events like conferences or workshops
We should highlight the strengths of our site and we should especially try to attract users that are interested in those strengths and topics. We tend to attract several users interested in reinforcement learning (which is very nice, given that RL is very central to AI), but, unfortunately, we do not attract many qualified users in other areas, like the philosophy of AI, cognitive architectures, AGI, evolutionary computation or even just the regular machine learning topics.
Upvotes: 2 <issue_comment>username_2: I don't have the time/focus/energy to do this, but, if I did, I would spam social media sites with the best Q&As.
It blows my mind, but people actually go to places like Quora and reddit for information. I'm not saying you can't occasionally find pearls hidden under all the garbage, but it's rarely worth the effort.
Users on other stacks used to claim we had no function, and should be subsumed into overlapping stack. We disagreed. **Strongly.**
Our scope, even narrowed, is still one of the widest—our field overlaps pretty much every field in some way. The related stacks could never deal with this potential depth.
We can take questions on the social impacts and history and etymology, in addition to the mathematical treatment of AI theory. We can discuss the philosophy, not just the practice.
Can you talk about Searle or Minsky and their ideas on those other stack? Can you ask what was the first AI? Or about what AI means? Nope.
Upvotes: 1
|
2021/06/29
| 914
| 3,671
|
<issue_start>username_0: In any domain of knowledge, we use many **words and terms** to describe the concepts, techniques, etc., related to that field. Afaik, terminology deals with words to be used in a context and their corresponding meaning.
Etymology is fundamentally different from Terminology. In general, terminology doesn't address the origin of the word in a particular context.
Some words carry information related to the phenomenon under consideration. For example, the term *gradient descent* contains the gist of the algorithm. There may also be many terms that are unrelated to the phenomenon and have historical origins, such as the name of a scientist or something else.
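(As an aside, here is a minimal sketch of how the name *gradient descent* encodes the procedure itself: repeatedly step against the gradient, i.e. "descend" along it. The learning rate, step count, and toy objective below are hand-picked for illustration only:)

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly move opposite the gradient: literally descending it."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # -> 3.0
```

So the term itself tells you what the algorithm does, unlike a term named after a person.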
If a user is confused about the words or terms or even phrases that are used in AI, and asks about the origin or the intention of selecting a particular term, then is it on-topic?
---
**Example questions**:
Why do we use the word `gram` in **n-gram**?
Why is logistic regression called so?<issue_comment>username_1: I am in favour of this type of question being on-topic here. In particular, it seems that those questions would be at the intersection of "history of AI" (which is on-topic here) and "terminology" (in the sense that you want to know something about terms). We have had some of those questions in the past. I remember one that I actually provided an answer to, which was never closed as off-topic, and I had never considered it as such: [Why is it called back-propagation?](https://ai.stackexchange.com/q/20044/2444).
Upvotes: 3 [selected_answer]<issue_comment>username_2: Absolutely! This is an area where I have formal training, and can legitimately go back to Proto-Indo-European. (But people typically don't like my analyses because they conflict with their own.)
If you see any unanswered terminological question, or are interested in the etymology, this is an area of special focus for me in the tech industry and academia, as is the history of computing.
Clear terminology is critical for any technical field, and good terminology is supported by etymology.
One of my favorite topics is what "Artificial" and "Intelligence" mean, and what they mean together. My research indicated the latter goes back to the pie \*legh/-leg as "discrimination in the sense of selection". Think gatherers choosing which berries to gather from a set of berries, or hunters identifying the most vulnerable member in a herd. (These are combinatorial problems in a naive sense, and definitely game theory problems—maximizing the minimum reward, or minimizing-the-maximum loss.) The Latin prefix "inter" means "between" or "among" in the sense of "matters", originally legal, lexical and so forth, which is to say "intellectual", because the Latin *lex* comes from the Greek *logos*. I even argue that, as a function, it is a grounded symbol. I express it M(u): "in an action space, utility", which can then be extended as with Hutter&Legg's formal definition of universal machine intelligence.
I see other interesting etymologies. As a student of Norse mythology, I can't help but think about the Norns (Fates) when I see the word "entanglement". Heisenberg pretty much coined it, if I recall correctly, and, because Old Norse, Standard German, and English all come from the Germanic branch, it translates quite precisely. I don't see this as a random choice, because H was talking about probabilities, which is a commentary on fate. The Norns wove (entangled) and cut (limit) the fates of human beings. I feel like Norse mythology allowed for free will on a local level (choosing to die in battle to gain entrance to Valhalla), as opposed to a global level (Ragnarok is predetermined.)
Upvotes: 1
|
2021/07/16
| 1,174
| 4,958
|
<issue_start>username_0: In current AI, most of the research work is happening in machine learning. A lot of buzzwords are popping up in this domain.
Most of the experts are into the implementation of new models, algorithms, etc.
It seems to me that it is currently impossible for us to collaborate with the related sites. And it is an undeniable fact that we have very few active experts.
With this context, I want to propose the following line
>
> Slowly and strategically widen the scope of our site by pushing our boundaries towards
> coding.
>
>
>
It is well known that our site started as a science community, but that cannot be a reason to continue it as a science-only site. We can increase our scope slowly by announcing that selected types of coding questions are encouraged.
If we push the boundaries strategically, I feel that experts will visit and contribute to our site.
What is stopping us from pushing our boundaries?
Is it impossible to allow such questions on our main site when the other two sites allow them? I don't think so.<issue_comment>username_1: I am against this idea because there are already other sites (DS SE and Stack Overflow) that cover these programming issues.
If we allowed this type of question, there would be more reasons to merge DS SE, CV SE, and AI SE, which, in my opinion, wouldn't really be a bad solution to the problem of having a limited number of experts, although, now, we cover topics that people that visit those other sites are or should not be really interested in (e.g. AGI) and people interested in AI and, in particular, AGI have a different ultimate goal than people interested only in ML or Statistics: AI is not just ML, that's why we exist.
Other sites do or should not cover certain aspects that we cover, such as AGI, cognitive architectures, or superintelligence. Unfortunately, not many people ask questions about these topics here anymore and we also wouldn't have many people prepared to answer those questions (because there aren't really many people interested in or researching these topics anyway), and most of the questions are just about the regular ML topics, so these questions could also have been answered on CV SE. For example, I was expecting [this question](https://ai.stackexchange.com/q/28662/2444) to have at least more upvotes (not to say an answer, as I don't believe there are many people that could answer this question: maybe I am one of the few that could attempt to answer that question, but I have not done it yet), but, apparently, almost nobody is interested in AGI anymore.
So, in my opinion, we should keep focusing on the **theoretical**, **philosophical**, and **social** aspects of Artificial Intelligence. In my view, it's fine to have a small community, provided that we can be self-sustained and we provide good-quality answers and ask good questions that can be useful in the future for anyone interested in AI and AGI. Moreover, it's also normal that we have many unanswered questions, as some of them are not trivial to answer or may not even have a definitive answer right now: there are still unsolved problems in AI, the first one being that we still don't know how to really create an AGI.
(Personally, I don't really want to see questions of the form "Why am I getting this IndexError in this ML program" here on AI SE. Generally, I am not interested in solving other people's programming issues/bugs (although occasionally I may do that on Stack Overflow). I am interested in the theoretical aspects of AI, I am interested in RL, AGI, explainable AI, evolutionary algorithms, and computational learning theory. Of course, these are just my personal tastes, but this is the main reason why I decided to stick to this site.)
So, if you're interested in programming, you should visit Stack Overflow. If you're interested in reinforcement learning, AGI (e.g. AIXI, Godel Machines, etc.), cognitive architectures, superintelligence, evolutionary algorithms, you should visit Artificial Intelligence Stack Exchange.
Having said that, there may still be some room to expand or redefine our scope, but, right now, I don't see many ways to expand our scope other than allowing questions about programming issues here.
Upvotes: 3 <issue_comment>username_2: Simple answer:
* Prior to username_1 and Dennis, our scope was insanely broad
This was because I was appointed mod by Stack when the prior mods left, knew there were issues, and felt that the community should decide what this stack needed to be.
We had a few years of this, with varied success, but did manage to attract enough committed, expert users, to yield our current form.
* We worked incredibly hard to narrow the scope to increase utility
I'm all for strategic expansion, at some point in the undetermined future, *iff* trusted users and mods agree. Presently, I agree with username_1, but I like your energy and enthusiasm, hanugm. Keep it coming!
Upvotes: 1
|
2021/07/26
| 1,126
| 4,877
|
<issue_start>username_0: The Stack network is built on a foundation of contribution and of helping others get their questions solved.
This question is only for the members who are dedicated to the Stack community. You can see many users with high reputation contributing consistently to the community over the years. Since our site is relatively new, it may have only a few such users. But I want to know the daily practices of such users related to the Stack network.
This question is intended only to know the practices of such dedicated users.
What are the practices of those users? Practices here include the tasks, routines, and techniques that help them contribute consistently.
---
For example, I am expecting points like
1. I always keep a couple of tabs of the site open in my browser.
2. One tab contains the new questions list and the other contains the active questions list.
3. I always keep some messages in a draft for comments in a separate file, etc.
|
2021/08/02
| 924
| 3,585
|
<issue_start>username_0: Recently I made the following request to a user:
>
> If you want to post it on meta, please tell me. Else I will do it.
>
>
>
The post I am asking about is related to the scope of the site. Since I was involved in the chat and curious to know whether my [question](https://ai.stackexchange.com/questions/29950/is-it-abuse-of-notation-to-use-tilde-operator-in-this-context) is on-topic or not, I asked the user for this information (whether or not they were ready to post on meta).
Reason(s) for asking
1. To avoid duplication on meta
2. Curious to know, in detail, about on-topic-ness
3. I personally felt that the members of the community are friendly enough to be asked
The user responded with the following statement
>
> I should not be asked beforehand if I intent or not to post anything
> on Meta.
>
>
>
I was an active member on [another stack site](https://hinduism.stackexchange.com/users/661/hanugm), and it is normal there for almost all of the users to make comment requests like the above. In fact, I have asked similar questions of a few users on our site as well.
But I now doubt whether asking for such information in comments is generally recommended. Please guide me on this.<issue_comment>username_1: I would say that your question was perfectly fine. If a user doesn't want to say something about their intentions, it is also fine. If you think you're bothering/annoying a user, it may be a good idea to stop the discussion there (at least for a while), but, in any case, your question was totally fine from my perspective and, as far as I know, there's no policy that would prevent you from asking such a question (that would be quite absurd, in my opinion).
Upvotes: 2 <issue_comment>username_2: Since the user you are referring to is me, let me clarify some things to avoid misunderstandings.
In principle, there is **absolutely nothing wrong** in asking such questions in the comments.
The whole issue also has nothing to do with "friendliness" (we can disagree and still be friendly); in hindsight, I should have worded the comment slightly differently, as:
>
> I **need** not be asked beforehand if I intent or not to post anything on Meta.
>
>
>
and that's all.
IMO, I just think that such questions of *intent* are not particularly useful, and that's all. What if I had replied "*yes, I will*" and then done nothing (because I am busy, away, changed my mind, etc.)? What if I had replied "*no, I will not*", and then, on second thought (everyone is entitled to a second thought, right?), proceeded to open a thread at Meta? Why should I commit, at a certain point in time, to doing or not doing something in the future, and why should this (non-)commitment on my side affect you and your own intended actions? Even if we both decided to open a Meta question, there is no guarantee of sorts that these questions would be identical (either in spirit or in letter).
So, while there is nothing wrong with it, my *personal recommendation* here would be to refrain from doing it, only because it does not seem to me to be particularly useful or productive, and not for any other reason. Even if the recipient chooses *not* to reply (for any reason, including that they have not made up their mind yet), they run the danger of looking somewhat rude; why would you want to put anyone in such a (potentially awkward) situation?
So, that was all behind my own comment, and nothing more. Hope it is clear now, and if it came out in any unfriendly way, let me hereby assure you that something like that was nowhere close to my intentions.
Upvotes: 3
|
2021/08/03
| 498
| 1,998
|
<issue_start>username_0: There are at least a few users on this site who were active in the past but are absolutely silent now.
Can we ping them in chat or in comments, or in some other way, to let them know that we need their contributions again, as in the past? Is it good practice to do so, or would we be disturbing them?<issue_comment>username_1: I don't know if someone has done that in the past or not, but I don't think it's a very good idea. These users probably don't contribute to the site anymore because they don't have the time or desire to do it. Most, if not all, of them are volunteers, like you and me, so we should not bother them with these requests.
It seems to me that a user usually stays because they like the idea of helping others, and maybe also of learning something by reading other people's answers/questions, and, in the meantime, they also find the time to do it. I've stayed around because I thought that this community could be useful and I like to help others whenever I can, not because someone asked me to do it.
Upvotes: 4 [selected_answer]<issue_comment>username_2: My sense is that interest waxes and wanes over the course of years. Sometimes the user becomes re-inspired and comes back, sometimes not.
I have multiple stacks where I either have high current participation, or don't have the time/inclination to think about them. This usually ping-pongs with my core areas of interest.
But you're on the right track that what we need to do is attract more qualified and earnest users. My suggestion:
* **Post the Q&As you find most rewarding on social media**
Encourage others to do this.
I'm not sure the wider AI community realizes what a great and reliable resource Stack:AI is. I've looked at the quality of AI information on less rigorous forums and Q&A sites, and, with certain exceptions, I stick to Stack.
See also: [List of exemplary questions on AI theory](https://ai.meta.stackexchange.com/questions/1627/list-of-exemplary-questions-on-ai-theory)
Upvotes: 1
|
2021/08/07
| 569
| 2,389
|
<issue_start>username_0: If I post a question and receive an answer, then, in general, I upvote it if it makes sense to me and accept it if it is apt to the context of the question.
But I cannot say whether the answer is correct or wrong.
If I later realize that the answer may be wrong, I think it is okay to remove my upvote or acceptance, so as not to signal to new users that the answer is correct.
Is it okay to do so? Or should I not upvote or accept until I am completely sure about the answer, which may take a long time?<issue_comment>username_1: I think it's not just fine: you should remove the upvote and unaccept the answer until you are sure whether the answer is correct or not (note that this is just my personal suggestion, so it's not a policy). You could, for example, leave the upvote because you think the answer is useful, although it may not be (fully) correct. That's also fine (but if it turns out to be wrong, you should remove the upvote, so that it's not misleading). Of course, the answerer may not like the idea. However, once you know the answer was actually correct, you can upvote and accept it again. It may not be a good idea to share this with the answerer, as they may take it personally (that's why votes are anonymous), but you're free to explain your decision to the answerer, if you think they deserve an explanation.
Upvotes: 2 <issue_comment>username_2: This behavior is fully OK, and fully consistent with the site policies as well; there is a reason why such actions are indeed permitted by the site functionality.
IMO, the site policies should be even more relaxed in such matters: currently, you cannot remove your vote (up or down) after a certain time, unless the post gets edited - but I am not sure how useful such a restriction is (except maybe for forcing you to be more careful when voting, but I doubt this is useful either).
Upvotes: 2 <issue_comment>username_3: I've grappled with this on rare occasions, where I can't undo unless the question or answer is edited. My metric for whether I'm going to edit, and probably piss someone off is:
* How much harm does it cause
If it's trivial, I might leave it, especially if it's just an upvote. But I think, in the case of accepted answers, where it would cause harm, one just has to bite the bullet and edit the post and remove the acceptance.
Upvotes: 1
|
2021/08/14
| 2,233
| 9,209
|
<issue_start>username_0: A few days ago, I posted [this question](https://ai.stackexchange.com/questions/30150/is-this-a-valid-argument-against-the-possibility-of-agi) that was later closed for being off-topic. I don't mean to beat a dead horse, but I genuinely don't think the question broke any rules.
The question I asked was:
>
> **Question.** Is my argument valid? Are there any significant holes or logical fallacies? Has any form of this argument been made before?
>
>
>
Here's why I don't think this question was off-topic:
* [This help article](https://ai.stackexchange.com/help/on-topic) lists the following topics as off-topic: career path recommendations, general programming questions, implementation questions unrelated to theoretical topics, and questions seeking pre-trained models. My question does not fall into any of these categories.
* [This help article](https://ai.stackexchange.com/help/dont-ask) describes the type of questions that should be avoided. In particular, it states that subjective questions should usually be avoided, but are sometimes okay. But even if my question is to be classified as subjective, it fulfills all the criteria for an allowable subjective question.
* The comments mentioned that my post was "not a question, but a discussion point," and "this could lead to discussions because some of your assumptions may not be correct."
Though I did ask a specific question, I understand that it could have led to discussion. But doesn't every question lead to some degree of discussion? I don't see how my question, which was focused on a specific argument against AGI, would lead to more discussion than open-ended questions like [this](https://ai.stackexchange.com/questions/26007/are-there-any-approaches-to-agi-that-will-definitely-not-work?noredirect=1&lq=1) and [this](https://ai.stackexchange.com/questions/7875/is-the-singularity-something-to-be-taken-seriously).
Also, if someone finds incorrect assumptions in my question, they could have posted those as an answer. My question was asking whether or not I made any incorrect assumptions or other logical missteps.
* I meant for my question to be like the [proof-verification](https://math.stackexchange.com/questions/tagged/solution-verification?tab=Newest) questions on Math Stack Exchange. I didn't mean for it to be some kind of ongoing debate or discussion. I was looking for answers of the form, "This argument is flawed because \_\_\_\_\_."
**Question.** Why was my question marked off-topic? Is there a specific rule/guideline that I broke?<issue_comment>username_1: I reviewed the question, which I like very much, but here's why it was closed:
It's more of a thesis that gets around to the question. In the previous incarnation of this stack, we were allowing it. But it becomes too easy to abuse, and so the community felt it was better not to allow it.
I don't see this question as that, but I think it would be more suitable if you addressed a single claim per question. I want to see more of these questions, so I hope you'll give the subject another shot.
Upvotes: 1 <issue_comment>username_1: **Repost of Question in question:**
The following is an argument for why I don't think artificial general intelligence (AGI) is technologically feasible with *machine learning methods*. There are likely many flaws in my argument, but I do find the overall idea to be compelling.
**Question.** Is my argument valid? Are there any significant holes or logical fallacies? Has any form of this argument been made before?
---
**TL;DR.** Training an AGI would require resources (computational power and data) comparable to all the resources that nature has invested into the evolution of human beings. Given our current abilities, this does not seem feasible.
**Claim 1.** Most of human knowledge is encoded into DNA.
Consider our knowledge of language. Why is it that the language model GPT-3 needed to be trained on hundreds of billions of words over thousands of GPUs to develop language skills comparable to what a human can develop in just several years? The answer is DNA. Humans already have language skills genetically hard-wired into them when they are born, which allows a human baby to learn language significantly faster than a computer can (this idea is similar to the [purely speculative] idea of a [language acquisition device](https://en.wikipedia.org/wiki/Language_acquisition_device)). More importantly, there is a huge difference in the amount of time and energy it takes a computer to learn language compared to a human. This huge difference indicates that most of an adult human's language knowledge is not learned within their short lifetime, but rather encoded through their DNA. A similar argument can be made for other aspects of human knowledge.
So if most of human knowledge comes from DNA, how is this knowledge obtained? I argue that the answer lies in evolution.
**Claim 2.** Natural selection can be viewed as a machine learning system.
To generate the DNA of an intelligent animal, let's suppose we treat this animal as a machine learning model. The parameters of this model are segments of DNA, and the optimization objective is to maximize the likelihood of an animal's survival. Natural selection trains this model via the following process:
1. Start with $n$ animals.
2. Let $\mathcal{L}$ be a function that computes the likelihood of an animal surviving. Apply this function to each of our $n$ animals to determine whether they survive. Suppose at the end, we have $n'$ animals remaining.
3. Let $c$ be the number of children each animal gives birth to, on average. Using the $n'$ available animals, mix and match their genetic codes and add random mutations to create $cn'$ new animals.
4. Repeat steps 2 and 3 with $n := cn'$.
By definition, this process describes a machine learning algorithm. For instance, consider the following analogous method of training a neural network:
1. Start with $n$ neural networks with randomized weights.
2. Let $\mathcal{L}$ be the loss function of the neural network. Pick the $n'$ neural networks with the smallest loss.
3. Let $c$ be some constant. Using the $n'$ available neural networks, mix and match their weights and make random adjustments to create $cn'$ new neural networks.
4. Repeat steps 2 and 3 with $n := cn'$.
Of course, this is arguably much worse than backpropagation and gradient descent. I return to this point at the end of my argument.
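To make the analogy concrete, the neural-network variant of the procedure can be sketched as a small program. The loss function, population size $n$, survivor count $n'$, offspring factor $c$, and mutation scale below are all arbitrary illustrative choices, not part of the claim:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(weights):
    # Stand-in for a network's loss: distance of the "weights" from a target.
    return np.sum((weights - 1.0) ** 2)

def evolve(n=100, n_survivors=20, c=5, dim=10, generations=50, mutation=0.1):
    # Step 1: start with n individuals with randomized weights.
    population = rng.normal(size=(n, dim))
    for _ in range(generations):
        # Step 2: keep the n' individuals with the smallest loss.
        losses = np.array([loss(w) for w in population])
        survivors = population[np.argsort(losses)[:n_survivors]]
        # Step 3: mix and match survivor weights (crossover) and add
        # random mutations to create c * n' new individuals.
        children = []
        for _ in range(c * n_survivors):
            a, b = survivors[rng.integers(n_survivors, size=2)]
            mask = rng.random(dim) < 0.5
            child = np.where(mask, a, b) + rng.normal(scale=mutation, size=dim)
            children.append(child)
        # Step 4: repeat with n := c * n'.
        population = np.array(children)
    return population[np.argmin([loss(w) for w in population])]

best = evolve()
```

Run for a few dozen generations, the best individual's loss drops steadily, which is the sense in which natural selection "optimizes" its objective — albeit far less efficiently than gradient descent, as noted above.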
**Claim 3.** Nature has invested significant computational power in “training” humans.
We might think of nature as a large simulation. Every worldly event, whether it be the wind or the rain or the inner workings of a plant, requires “computational power” to execute. Furthermore, humans have taken hundreds of millions of years to train, and each year, natural selection has probably processed billions of animals (this number is just a wild guess, but I would say it's very conservative) that are relevant to the evolution of humans. Let's say it takes a modern computer, on average, one day to simulate the life of one such animal. Combining these estimates, the total amount of computational time $T$ required to “train” a human is given by:
$$
\begin{align}
T &= (\text{\# of years}) \cdot (\text{animals processed per year}) \cdot (1 \ \text{day}) \\
&= 10^8 \cdot 10^{9} \cdot (1 \ \text{day}) \\
&= 3 \cdot 10^{14} \ \text{years}.
\end{align}
$$
That's a lot of time!
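The arithmetic behind this estimate can be checked directly; the numbers are just the rough guesses stated above:

```python
# Back-of-the-envelope check of the estimate in Claim 3.
years_of_evolution = 1e8   # "hundreds of millions of years"
animals_per_year = 1e9     # "billions of animals processed per year" (a wild guess)
days_per_animal = 1.0      # assumed cost to simulate one animal's life

total_days = years_of_evolution * animals_per_year * days_per_animal
total_years = total_days / 365

print(f"{total_years:.1e} years")  # prints: 2.7e+14 years
```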
**Claim 4.** Nature has invested significant data in “training” humans.
The argument here is similar to the previous claim: every natural event (“whether it be the wind or the rain or the inner workings of a plant”) is a piece of data that may have been used to “train” human beings. The sum of natural events that have been relevant to the training of humans has been large, so the amount of data that has been invested into the training of humans has also been large.
**Argument.** From the above four claims, we may deduce three plausible scenarios in regard to AGI:
1. AGI will not be developed using machine learning methods.
2. AGI will be developed with machine learning methods that are many, many, MANY orders of magnitude more efficient than natural selection (in regards to data efficiency and computational efficiency). (See claims 3 and 4 for an idea of what “many, many, MANY” means.)
3. AGI will be developed using computers that are many, many, MANY orders of magnitude more powerful than current computers.
Scenario 3 seems unlikely given the failure of Moore's law. Scenario 2 sounds more reasonable, but given how AI has developed in the past few decades, I would still say scenario 2 is rather unlikely. Currently, we don't have machine learning methods that perform significantly better than neural networks (+ variants like RNNs and CNNs) and gradient descent (+ variations like Adam). While these methods are undoubtedly better than the procedure of natural selection I described above, I don't think they are “many, many, MANY orders of magnitude” more efficient, especially considering how inefficient the optimization of deep neural networks is. Therefore, scenario 1 is most likely to happen, at least in the near future.
Upvotes: -1
|
2021/08/20
| 358
| 1,447
|
<issue_start>username_0: The title says it all. @nbro has helped in increasing clarity in my questions. Is there anything else I can do?<issue_comment>username_1: Yes, to attract attention to your posts/questions, you can start a **bounty** on them. You should carefully read [this](https://ai.stackexchange.com/help/bounty), which explains what a bounty is and what the consequences are (i.e. you will not get your reputation back even though you're not satisfied with the answers or it's not even guaranteed that you will get an answer at all).
Upvotes: 2 <issue_comment>username_2: Bounties can be an incentive, but my sense is they are mostly attractive to people new to a given stack, who want to build initial rep quickly. (There are good reasons for this, as it confers a series of privileges.)
When I'm new on a stack, I seek out questions I can answer to get basic mod privileges quickly, so I can start doing some "housekeeping" on those stacks when time permits.
My sense is that users with sufficient rep often transition to answering the questions that pique their interest, or occupy a subject in which they feel it's important to educate.
Since your rep is low, and you don't have a lot to spend, I might reserve bounties only for critical questions.
*(My own questions typically don't do well on any stack, and I've just had to accept that, but I still find participation in the stack community incredibly rewarding:)*
Upvotes: 1
|
2021/09/29
| 480
| 1,991
|
<issue_start>username_0: Consider the following [introduction lines](https://ai.stackexchange.com/tour) regarding our main site
>
> Artificial Intelligence Stack Exchange is a question and answer site
> for people interested in the theoretical (including mathematical),
> **philosophical**, social, historical, and certain developmental and
> academic aspects of artificial intelligence.
>
>
>
What exactly is the philosophy that is related to artificial intelligence? I can understand the other aspects, such as the mathematical, social, and historical aspects of artificial intelligence, but it is not known to me what is meant by the philosophical aspect of artificial intelligence.
|
2021/09/30
| 907
| 3,589
|
<issue_start>username_0: I find this an unfortunate choice by Searle in the present era, and argue
* The Grecian Room is more suitable
My feeling is that Chinese Room creates a negative perception of Asian people and Asian Americans as "other". When Searle used it, this notion of otherness was surely an influence. At the time, it wouldn't have been seen as a problematic choice, but it bothers me every time I have to reference it.
* I don't think the thought experiment is so important that we have to use that name.
This would be one argument:
>
> <NAME>, here understood as a narrative philosopher, wrote about the difference between xenoglossia and glossolalia, which comments on Searle, in regard to Ancient Greek specifically. [See the VALIS trilogy.] Dick is a major narrative philosopher along with Asimov, Lem, and recently, Rajaniemi. Dick and Asimov have probably had more influence than Searle on the public understanding of AI. They use the mythology of AI to explore social concepts in the manner of Plato.
>
>
>
This is sort of a [Washington Redskins](https://en.wikipedia.org/wiki/Washington_Redskins_name_controversy) type of deal here—and that's my home team.
Searle is important, but I don't think he's foundational in the same way as <NAME>, Turing, Shannon, Gödel, Hilbert, etc.
I don't think Searle's intentions were bad, but I don't like this label in 2021.<issue_comment>username_1: Whether or not people like this name, this is how it has been known in the AI community for many years, so I don't think we should relabel it. It's called the Chinese Room argument because it questions/involves the "understanding" of the Chinese language, which many people think to be very difficult, more difficult than, for example, German or Russian. (Of course, for Chinese people, it might be easier than other languages, but maybe not necessarily.) So, in a sense, this is a good name for the philosophical argument because it is suggestive/descriptive.
I don't think it creates a negative perception of Asian people. I've never thought of that. Searle had to pick something that people would think that you really need "understanding" to deal with and you can't just manipulate symbols. The Chinese language was chosen probably because it may be difficult for many people.
Upvotes: 0 <issue_comment>username_2: Of course it is also unfair on a large number of Chinese-speaking AI researchers that the metaphor in Searle's argument makes less sense to them (imagine the "English Room"). I would support re-naming it for clarity in this case, separately to any concerns of causing offense.
However, I don't think AI Stack Exchange or its meta site is the forum for renaming things, beyond noting the issue for reference (as the question does). AI Stack Exchange is not a leading/influencing site for AI researchers and writers.
Our task is to be a repository of questions and answers. If someone asks "What is the Chinese Room argument in AI?" they would reasonably expect to find an answer here. No-one is going to ask "What is the Graecian Room argument?" or see any other name for the analogy that is associated with Searle.
The best you can do here - and I note you have - is to make your alternative suggestion when answering a question on the topic. Until any influence on the subject spreads out to other sites and media such that a new name takes hold, then AI Stack Exchange should continue to refer to the name that Searle gave the problem, with maybe an aside or footnote with other suggestions, or maybe linking this meta question.
Upvotes: 2
|
2021/12/28
| 487
| 1,866
|
<issue_start>username_0: ###### Raising this concern here, as flags and requests for moderator intervention have failed. Requesting intervention of AI staff moderators.
It has come to my attention this question(<https://ai.stackexchange.com/questions/32012/educational-resources-and-programming-languages-for-ai-ml/32016#32016>) has been deleted by **diamond moderator [nbro](https://ai.meta.stackexchange.com/users/2444/nbro)** without cause.
Kindly note this question was locked due to historical significance.
[](https://i.stack.imgur.com/mit3Y.png)<issue_comment>username_1: Your answer was deleted along with the question, which was too broad and partially off-topic. Questions that are off-topic and are of no value to our community can be deleted by moderators. Our community focuses on the **theoretical aspects of artificial intelligence**. Asking for books, sources, programming languages, etc., all at the same time is too broad, which can lead to poor answers, so this is discouraged.
Please, read [Why and how are some questions deleted?](https://ai.stackexchange.com/help/deleted-questions), which states
>
> Questions that are **extremely off topic**, or of **very low quality**, may be removed at the discretion of the community and **moderators**.
>
>
>
Upvotes: 2 <issue_comment>username_2: Aside from being off-topic, the "locked due to historical significance" lock seems rather dubious and definitely does not satisfy the [reasons for locking a post in such a way listed here](https://ai.stackexchange.com/help/locked-posts). As hanugm rightfully pointed out in a comment to that now-deleted question, we already have plenty of much older questions of a similar nature with much more activity on them. We're not losing anything major by deleting this, so the deletion is fine.
Upvotes: 1
|
2021/12/28
| 575
| 2,080
|
<issue_start>username_0: ##### Raising this concern here, as flags and requests for moderator intervention have failed. Requesting intervention of AI staff moderators.
It has come to my attention this question has been deleted by **diamond moderator [nbro](https://ai.stackexchange.com/users/2444/nbro)** without cause.
Why did the diamond moderator misuse his powers to delete all my answers and associated questions?
**Nbro** kindly provide an explanation
**UPDATE**: This question was closed as off-topic 2 months ago. A just and unbiased moderator would have done the right thing and transferred it to the relevant SE site instead of deleting it recently. **Was it some sort of personal vendetta against me?**
<https://ai.stackexchange.com/questions/32164/how-to-take-data-at-regular-intervals/32169#32169>
[](https://i.stack.imgur.com/aZM3J.png)<issue_comment>username_1: As written in [my other answer](https://ai.meta.stackexchange.com/q/2837/2444), this question was deleted because it was **off-topic and unclear**, so **very poor**. Programming questions are off-topic here. Please, take a look at [our on-topic page](https://ai.stackexchange.com/help/on-topic).
Upvotes: 2 <issue_comment>username_2: As [nbro already wrote](https://ai.meta.stackexchange.com/a/2840/1641), this question was both off-topic and of extremely poor quality, so deletion is fine.
>
> This question was closed as off-topic 2 months ago. A just and unbiased moderator would have done the right thing and transferred it to the relevant SE site instead of deleting it recently.
>
>
>
Probably the only valid target site to migrate towards would have been StackOverflow in this case. However, [generally we do not want to migrate poor-quality questions](https://meta.stackexchange.com/a/10250/376651), which this question was. It showed no effort at all, did not demonstrate what the author already attempted, it's basically a "please do my work for me" question. It would have been massively downvoted or even again deleted over on StackOverflow too.
Upvotes: 3
|
2021/12/28
| 711
| 2,784
|
<issue_start>username_0: ##### Raising this concern here, as flags and requests for moderator intervention have failed. Requesting intervention of AI staff moderators.
Is it the general norm that a **diamond moderator [nbro](https://ai.stackexchange.com/users/2444/nbro)** can delete an answer to prevent a user from helping others or getting bounty points for their efforts?
It has come to my attention my answer was deleted, without cause [Why is gradient descent used over the conjugate gradient method?](https://ai.stackexchange.com/questions/32428/why-is-gradient-descent-used-over-the-conjugate-gradient-method/32559#32559)
Kindly provide an explanation!
[](https://i.stack.imgur.com/rncbs.png)<issue_comment>username_1: This answer was deleted because you copied content from a paper without clarifying which parts you copied, so you basically tried to make it seem that the content in that answer was yours, while, in reality, you copied many things (if not everything) from that paper, i.e. plagiarism.
Plagiarised content, as I explain in [my other answer](https://ai.meta.stackexchange.com/a/2836/2444), is subject to deletion. People that plagiarise can also be suspended, so, please, avoid doing this next time. Do not copy and paste content from external sources without clearly explaining which parts were taken from the external source.
If you don't know how to quote certain parts from an external source, take a look at this: [How to reference material written by others](https://ai.stackexchange.com/help/referencing).
Moreover, ideally, you should not just quote an excerpt from a paper, but you should explain it in your own words.
Having said that, if you're not familiar with the topic, I would recommend that you do not attempt to provide an answer, in order to avoid spreading misinformation or misunderstandings. However, this is just my personal suggestion.
Upvotes: 2 <issue_comment>username_2: >
> Is it the general norm that a diamond moderator username_1 can **delete an answer to prevent a user from helping others or getting bounty points for their efforts**?
>
>
>
No, those reasons as highlighted in bold would not be valid reasons for deletion. There is no indication that these were actually the reasons for deletion in this case though.
>
> It has come to my attention my answer was deleted, **without cause**
>
>
>
It was not without cause. The reason for deletion (plagiarism, which is a valid reason) was already pointed out by username_1 in a comment shortly before/after deletion, and if I'm not mistaken you as author of the original post should still be able to read that even though it was deleted since (just like how you can still read the post itself).
Upvotes: 3
|
2022/01/14
| 742
| 3,174
|
<issue_start>username_0: It is important for every member of our community to edit the questions and answers in order to enhance the quality of our site.
There are several reasons for editing. Few of them are
1. To remove spelling mistakes
2. To improve grammar
3. To rephrase sentences
4. To add relevant tags
5. To fix technical issues
6. To help new users
7. To improve posts that are vague with the help of comments from the asker
and so on.
After editing a post, providing an edit summary is optional. In general, I skip the edit summary, and I am unaware of how it would be useful to anyone later.
In this context, I want to know **how important** it is to provide an edit summary.
Activities like asking questions, answering questions, and upvoting and downvoting are very important, and editing and improving existing questions is comparably important. I want to know the relative importance of providing an edit summary, so that I can focus my effort accordingly.<issue_comment>username_1: I don't think you need to provide a custom edit summary **for every edit**, especially when you already have the privilege to edit posts automatically.
In fact, if you have the privilege to automatically edit posts, then it means that, **in theory**, the community trusts you and that you're going to edit posts appropriately, which means that, for example, you will not make the post unclearer, introduce inappropriate tags or change the meaning of the question.
In particular, unless you want, I would say that there's no need to provide an edit summary when you're just fixing typos or improving the language. However, if you're making a **substantial edit**, e.g. removing unnecessary details, changing all tags, completely rewriting the title, or even rewriting the question completely differently (but make sure that the meaning of the questions is the same!), then, in that case, I really encourage you to provide an edit comment, which summarises and **motivates** your edit. See [this example](https://ai.stackexchange.com/posts/34153/revisions).
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you're suggesting an edit, and this is from experience, a good, descriptive edit reason is going to ensure your edit goes through. I've had at least one edit rejected because I had an excessively clever edit reason involving grumpy cat which made no sense. Even Random, the mysterious edit-reason writer-in-chief on Super User, now gives a reason (in brackets) for [edits](https://superuser.com/users/307/random?tab=activity&sort=revisions).
I would treat this as I would a commit to a git repository. It's *harder* to see a small edit at a glance than a big one. I'd also add, as an active user on a larger site with a significant number of questions, that you're not going to remember a post you made 10 years ago, and you might find yourself looking at something just as old for answers. Spending the extra minute or so on a descriptive edit reason, just like the ones you've posted above, is an investment in the future.
So I would say yeah, there's value in it. It's not mandatory, but it's good practice.
Upvotes: 2
|
2022/01/17
| 1,407
| 6,031
|
<issue_start>username_0: Please consider [this](https://ai.stackexchange.com/questions/30012/do-researchers-generally-treat-tensors-just-as-mathematical-objects-with-certain) question for better understanding the question as this doubt originated while going through the question.
The core part of the question is **How do researchers generally treat tensors?**
For this question, I think, it is better if I modify a certain portion of the answer as follows
>
> I would say they are treated as **multidimensional arrays of
> numbers**. They are not visualized in their actual dimension.
> Sometimes small ones will be visualized when someone is trying to
> explain a concept that requires it.
>
>
>
I just boldified the relevant part of the answer that needs attention.
Is such activity recommended without the knowledge of the actual poster?<issue_comment>username_1: Everyone will have slightly different boundaries on what feels ok as edits to their original content. I'm usually fine with typo and layout corrections. Most of the edits I see are due to someone editing the question for typos or clarity, then editing my answer where it quotes the old version of the question. But I do also make plenty of typos myself, and appreciate it when someone finds and corrects one.
The more subjective an edit is, the more it deserves an attempt at conversation first. Obvious typos and mistakes don't need any extra conversation, it is even wasteful of OP and editor's time. However, emphasis can subtly change the meaning of text, so is worth a second or so of thought, and maybe communication.
I would say in this case, an edit to bold or otherwise emphasise a key phrase *done directly* could go too far. Although I would say that for me this is a small issue of politeness, and I would not be motivated to complain or do anything about it. As the OP, I get notified of edits, so if I felt the highlighting was not appropriate it is easy to roll back. If I was responsible for your example answer, I'd probably just leave your edit in place.
As an original poster I would prefer if possible a comment with the suggestion to highlight the key phrase, or better still a statement about what is hard to read or understand about the answer, so I can decide on a suitable edit - it may not need to be highlighting.
The following things make it OK (in my opinion) to directly edit to make this kind of improvement without discussion:
* The rest of the answer already uses bold highlighting for key phrases, and the one you want to highlight is an obvious candidate.
* You are already making other edits for layout and clarity, and the highlighting is one small extra.
* The answer is old, the original poster is not available, or has made the answer a community wiki.
Upvotes: 2 <issue_comment>username_2: I generally agree with Neil's suggestions.
I would also like to emphasize that making or not a word or sentence bold is sometimes a
* **matter of style** and
* it **depends on what the OP or you want to emphasize** (you see, here, **I think** that these two points are particularly important, so I made them bold).
The problem is that what you want to emphasize might not be what the OP thinks is really important. Maybe you're just not familiar with a concept and you think that a word needs to be emphasized, while, in reality, it's not really that important (for the OP, at least).
Having said that, I think that, especially **when an answer is long** and there are no subsections, titles, etc, but it's just plain text, it may be a good idea to edit the post to structure it in a way to emphasize the main points (which might also include making the most important words or parts of the answer in bold or maybe just italic). This would fall into the category of "improving the clarity/presentation of the post." If an answer contains only 2-3 lines, that might not be necessary. So, as a rule of thumb, I would say that the longer the answer the more necessary emphasis might be. But keep in mind that, if you make everything bold, that would have the opposite effect of what you originally wanted.
As the person with the highest number of edits on this site, I can say that I've edited posts for multiple reasons, including the ones mentioned in Neil's answers (and I've edited several of his answers for those same purposes). Sometimes, I've edited posts and the OP didn't like my edits. I would say that my edits usually improve the post (although, especially in the past, that might not always have been the case) and there are some stubborn users that don't really want their posts to be touched. In that case, it may be a good reason to flag the issue to a moderator or let the user know that, [if they are not comfortable with their posts being edited, maybe this site is not for them](https://ai.stackexchange.com/help/editing). **You can and should edit posts to improve the clarity, but the boundary between clarity and style is not always clear**. For example, I noticed that some users like to make certain parts of their answers bold or maybe use sections and bullets, while others don't really care about that and don't even use paragraphs and spaces (but sometimes that might just be because they didn't have the time or will to spend 1 hour to write a beautiful answer, so it might still be a good idea to edit those posts to improve their clarity!).
My final suggestions about your specific question are:
1. if you really think something needs to be made bold (because the answer is plain text and not easy to understand the key takeaways), then edit it, make it bold, or change the structure.
2. If the OP rolls back your edit, then just don't start an edit war, i.e. don't edit again, and maybe leave a comment explaining why you think that your edit was opportune, and/or flag the answer for a moderator to intervene
3. If you think that you would need to change many things in an answer to make it readable or valuable, just downvote. Don't lose your time with those answers.
Upvotes: 3
|
2022/05/24
| 872
| 3,731
|
<issue_start>username_0: I'm participating in the [ongoing election](https://ai.stackexchange.com/election), and, similar to one of the candidates, I have concerns about the statements made by some of the candidates.
Is there a way to reach out to the candidates and ask them questions?
My concern with the other candidates is *not* about who they are -- I know little about them -- but rather about questions they haven't answered:
1. Do you know what you are getting into? There's a lot of work involved in being a moderator. I appreciate that some candidates are taking the philosophy, "good moderators do as little as possible," but do they know how much work it takes to do this?
2. What is your philosophy of moderation? How do you deal with difficult moderation cases? I appreciate that some of the candidates desire to salvage what can be. How do they plan to deal with what cannot be salvaged?
I believe there are other questions I would ask if I could pose questions to the candidates either privately or publicly.
I'm a busy father and professional and unlikely to get too involved in this election. But at a minimum level of engagement, how can I constructively help willing candidates improve their plans for moderation?
---
Also, how many moderators total will be selected in this election? I read through the election page and feel that there's lots I still don't know about this election. Am I missing something?<issue_comment>username_1: Often in elections, there's a questionnaire compiled by the community full of similar questions to the ones you've posted that candidates are encouraged to answer. Since this is technically a pro-tem election, that didn't happen this time.
During the "nomination" phase of the election, you can leave comments on the nominations. Once voting has opened, that's no longer an option, unfortunately.
I personally can be found in [The Singularity](https://chat.stackexchange.com/rooms/43371/the-singularity), which is the main chat room for Artificial Intelligence Stack Exchange, and am willing to answer questions posed there.
---
As for how many moderators are being elected, on the top right of the /election page it says this:
>
> candidates 6 | positions 1
>
>
>
There are six candidates running; one will be selected to be added to the moderator team.
Upvotes: 2 <issue_comment>username_2: During the nomination phase, you could interact with the candidates. Unfortunately, I was the only one to do so, i.e. I was the only one to ask clarification questions, and I asked clarification questions to all of them (except username_1, whom I interacted with in our chat room and knew a little bit, since I was once a mod too). So, it seems that almost nobody is interested in selecting the right person for this mod role...
I wish people could still see the comments, but, anyway, 2 people didn't really address my concerns at all (most of them are about their activity and contributions to the site, which were lacking or poor). I will write here their names because I think it's important for the community to choose the best candidate: quintumnia and <NAME>. The other candidates tried, to some extent, to address my concerns, but I was not fully satisfied with their answers...
Personally, I think most nominations/candidates are not suitable for the role because, actually, all of them have been inactive for many years or never contributed to the site at all in any way, and only became active to nominate themselves (can you believe this? your judgement here!!!). Only some of them have shown some knowledge of AI, and only 1-2 know how SE really works and seem to have an idea of what it means to be a mod.
Upvotes: 0
|
2022/10/17
| 1,168
| 4,213
|
<issue_start>username_0: I recently noticed that we have a [homework](https://ai.stackexchange.com/questions/tagged/homework "show questions tagged 'homework'") tag here on AI.SE. I'm not sure exactly when it was created, but this tag doesn't seem like a useful tag to me, for basically the same reasons that [Stack Overflow got rid of the homework tag ten years ago](https://meta.stackexchange.com/q/123758/294691).
The tag does not describe the content of the question; it describes where the question came from. While the fact that it's homework can occasionally be useful - especially when determining if something is plagiarism - a tag is not a useful way of determining that. People don't generally want to sort explicitly homework questions, because there's no particular uniting factor aside from their origin. The topic can vary wildly.
This doesn't seem like a useful tag; does anyone have any objections to burninating it?<issue_comment>username_1: Generally speaking it is the [advice on MSE](https://meta.stackexchange.com/search?q=homework+tag) to not allow the [homework](https://ai.stackexchange.com/questions/tagged/homework "show questions tagged 'homework'") tag (mostly applicable to SO), ***but*** there is an important exception: [some sites prohibit full answers](https://meta.stackexchange.com/a/165683/282094) to homework questions.
Of course, should we disallow the tag, [moderators can ban](https://meta.stackexchange.com/q/19018/282094) the tag's usage.
This is the [first non-closed usage](https://ai.stackexchange.com/q/10910/17742), and it seems helpful, as indicated by the discussion there.
Here are three examples where someone added the tag with an edit ([1](https://ai.stackexchange.com/q/14032/17742), [2](https://ai.stackexchange.com/q/21879/17742), [3](https://ai.stackexchange.com/q/21885/17742)), and then went on to offer what looks to be a complete answer; @Nbro may have thoughts on the usage of the tag.
Other than that, the early usages of the tag seem helpful, assuming that we don't want to do people's homework for them.
Upvotes: 2 <issue_comment>username_2: * [Quantum Computing has the "textbook-and-excerices" tag](https://quantumcomputing.stackexchange.com/questions/tagged/textbook-and-exercises).
* [Chemistry has the "homework" tag](https://chemistry.stackexchange.com/questions/tagged/homework) but it has been deprecated.
* [Physics has the "homework-and-exercises" tag](https://physics.stackexchange.com/questions/tagged/homework-and-exercises) and it has **22,000+ questions**.
* I didn't see any "homework" or "exercises" tag on Mathematics, but I'm not sure if the search was just taking too long to work.
If the homework tag here is burninated, it shouldn't be because SO burninated it, nor should it be for "basically the same reasons" as the ones SO used to justify burninating it. The tag should be burninated only if the ***consensus*** within ***this specific community*** is to do so.
Luckily, the [homework](https://ai.stackexchange.com/questions/tagged/homework "show questions tagged 'homework'") tag here at AI only has 19 questions and 0 watchers so far, so it wouldn't be a big deal to burninate it, but I think the reason to do so has to be a good one (not just "SO did it so we will do it too").
Upvotes: -1 <issue_comment>username_3: I believe this tag can be useful, and I think it should be kept, independently of the consensus on this topic that we see in meta or other SE sites. It doesn't really harm the site.
The reason why I think it should be kept is simple:
>
> clearly, many of our visitors are students, and we may not want to give them the full answers to their homework problems/questions.
>
>
> Students may reformulate their questions in ways that do not look like homework questions and, in general, it may not be easy to distinguish a homework question/problem from a non-homework one, but, if someone decides to use this tag, they want to let us know that we probably shouldn't give them the complete answers, although one could still do that.
>
>
>
Nevertheless, I don't really have a very strong opinion on this topic. If you want to remove it, it wouldn't also harm much the site.
Upvotes: 1
|
2016/08/02
| 1,193
| 4,137
|
<issue_start>username_0: What does "backprop" mean? Is the "backprop" term basically the same as "backpropagation" or does it have a different meaning?<issue_comment>username_1: "Backprop" is the same as "backpropagation": it's just a shorter way to say it. It is sometimes abbreviated as "BP".
Upvotes: 5 [selected_answer]<issue_comment>username_2: Yes, as Franck rightly put it, "backprop" means backpropagation, which is frequently used in the domain of neural networks for error optimization.
For a detailed explanation, I would point out [this tutorial](http://neuralnetworksanddeeplearning.com/chap2.html) on the concept of backpropagation from a very good book by <NAME>.
Upvotes: 2 <issue_comment>username_3: 'Backprop' is short for 'backpropagation of error', used in order to avoid confusion with the *backpropagation* term.
Basically, *backpropagation* refers to the method for computing the gradient of the case-wise error function with respect to the weights for a feedforward network (Werbos, 1974), and *backprop* refers to a training method that uses backpropagation to compute the gradient.
So we can say that a *backprop* network is a feedforward network trained by *backpropagation*.
The 'standard backprop' term is a euphemism for the *generalized delta rule*, which is the most widely used supervised training method.
Source: [What is backprop?](ftp://ftp.sas.com/pub/neural/FAQ2.html#A_backprop) at FAQ of Usenet newsgroup comp.ai.neural-nets
References:
* <NAME>. (1974). Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. PhD thesis, Harvard University.
* <NAME>. (1994). The Roots of Backpropagation: From Ordered Derivatives to Neural Networks and Political Forecasting,Wiley Interscience.
* <NAME>. (1995), Nonlinear Programming, Belmont, MA: Athena Scientific, ISBN 1-886529-14-0.
* <NAME>. and <NAME>. (1996), Neuro-Dynamic Programming, Belmont, MA: Athena Scientific, ISBN 1-886529-10-8.
* <NAME>. (1964), "Some methods of speeding up the convergence of iteration methods," <NAME>. Mat. i Mat. Fiz., 4, 1-17.
* <NAME>. (1987), Introduction to Optimization, NY: Optimization Software, Inc.
* <NAME>., and <NAME> (1999), Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks, Cambridge, MA: The MIT Press, ISBN 0-262-18190-8.
* <NAME>., <NAME>., and <NAME>. (1986), "Learning internal representations by error propagation", in <NAME>. and <NAME>., eds. (1986), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1, 318-362, Cambridge, MA: The MIT Press.
* <NAME>. (1974/1994), The Roots of Backpropagation, NY: John Wiley & Sons. Includes Werbos's 1974 Harvard Ph.D. thesis, Beyond Regression.
Upvotes: 3 <issue_comment>username_4: It's a fancy name for the multivariable chain rule.
Upvotes: 1 <issue_comment>username_5: We need to compute the gradients in order to train deep neural networks. A deep neural network consists of many layers, with weight parameters between the layers. Since we need to compute the gradient of the loss function for each weight, we use an algorithm called backprop. It is an abbreviation for **backprop**agation, which is also called error backpropagation or reverse differentiation.
It can be understood well from the following paragraph taken from [Neural Networks and Neural Language Models](https://web.stanford.edu/%7Ejurafsky/slp3/7.pdf)
>
> For deep networks, computing the gradients for each weight is much more complex, since we are computing the derivative with respect to weight parameters that appear all the way back in the very early layers of the network, even though the loss is computed only at the very end of the network. **The solution to computing this gradient is an algorithm called error backpropagation or backprop**. While backprop was invented for neural networks, it turns out to be the same as a more general procedure called backward differentiation, which depends on the notion of computation graphs.
>
>
>
Upvotes: 0
|
2016/08/02
| 758
| 3,184
|
<issue_start>username_0: Does increasing the noise in data help to improve the learning ability of a network? Does it make any difference or does it depend on the problem being solved? How does it affect the generalization process overall?<issue_comment>username_1: Noise in the data, to a reasonable amount, may help the network to generalize better. Sometimes, it has the opposite effect. It partly depends on the kind of noise ("true" vs. artificial).
The [AI FAQ on ANN](ftp://ftp.sas.com/pub/neural/FAQ3.html#A_noise) gives a good overview. Excerpt:
>
> Noise in the actual data is never a good thing, since it limits the accuracy of generalization that can be achieved no matter how extensive the training set is. On the other hand, injecting artificial noise (jitter) into the inputs during training is one of several ways to improve generalization for smooth functions when you have a small training set.
>
>
>
In some fields, such as computer vision, it's common to increase the size of the training set by copying some samples and adding noise or other transformations.
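A minimal sketch of that augmentation idea (the function name, noise scale, and data here are illustrative, not from any specific library):

```python
import numpy as np

def jitter_augment(X, y, copies=4, sigma=0.05, seed=0):
    """Return the dataset plus `copies` noisy duplicates of each sample.

    sigma controls the jitter strength; too large and the augmented
    samples no longer resemble the originals.
    """
    rng = np.random.default_rng(seed)
    X_aug, y_aug = [X], [y]
    for _ in range(copies):
        X_aug.append(X + rng.normal(0.0, sigma, size=X.shape))
        y_aug.append(y)                       # labels are unchanged
    return np.concatenate(X_aug), np.concatenate(y_aug)

X = np.array([[0.0, 1.0], [1.0, 0.0]])
y = np.array([0, 1])
X_big, y_big = jitter_augment(X, y)
print(X_big.shape, y_big.shape)               # (10, 2) (10,)
```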
Upvotes: 4 [selected_answer]<issue_comment>username_2: We typically think of machine learning models as modeling two different parts of the training data--the underlying generalizable truth (the signal), and the randomness specific to that dataset (the noise).
Fitting both of those parts increases training set accuracy, but fitting the signal also increases test set accuracy (and real-world performance) while fitting the noise decreases both. So we use things like regularization and dropout and similar techniques in order to make it harder to fit the noise, and so more likely to fit the signal.
Just increasing the amount of noise in the training data is one such approach, but seems unlikely to be as useful. Compare random jitter to adversarial boosting, for example; the first will slowly and indirectly improve robustness whereas the latter will dramatically and directly improve it.
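The dropout idea mentioned above can be sketched in a few lines (a minimal illustration; the drop rate and vector size are arbitrary): at training time, activations are randomly zeroed and the survivors rescaled, which makes it harder for the network to rely on, and thus fit, dataset-specific noise.

```python
import numpy as np

def dropout(h, p_drop=0.5, train=True, seed=None):
    """Inverted dropout: zero activations with probability p_drop at
    train time, scaling survivors so the expected activation is unchanged."""
    if not train or p_drop == 0.0:
        return h                              # no-op at test time
    rng = np.random.default_rng(seed)
    mask = rng.random(h.shape) >= p_drop      # which units survive
    return h * mask / (1.0 - p_drop)

h = np.ones(10000)
h_train = dropout(h, p_drop=0.5, seed=0)
# Roughly half the units are zeroed, but the mean stays near 1.0
print(round(h_train.mean(), 1))
```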
Upvotes: 3 <issue_comment>username_3: PS: *There are already some very good answers provided here; I will merely add to these answers in the hope that someone will find this useful:*
Introducing noise to a dataset can indeed have a positive influence on a model. In fact, this can be seen as doing the same thing that you would normally do with [regularizers](https://en.wikipedia.org/wiki/Regularization_(mathematics)) like [dropout](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pd). Some examples of doing this are [Zur et al.](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2771718/) and [Cireşan et al.](https://arxiv.org/pdf/1003.0358.pdf), where the authors successfully introduced noise into the dataset to reduce over-fitting.
The catch is in knowing how much noise is too much. If you add too much noise, this might render your dataset useless, in that the resulting dataset may no longer bear sufficient resemblance to the original, so you might as well be training on a completely different dataset. Thus too much noise could be seen to cause under-fitting, just like extremely high dropout rates.
*As the saying goes; change balance is the spice of life :).*
Upvotes: 1
|
2016/08/02
| 943
| 3,609
|
<issue_start>username_0: When you're writing your algorithm, how do you know how many neurons you need per single layer? Are there any methods for finding the optimal number of them, or is it a rule of thumb?<issue_comment>username_1: There is no direct way to find the optimal number of them: people empirically try and see (e.g., using cross-validation). The most common search techniques are random, manual, and grid searches.
There exist more advanced techniques such as Gaussian processes, e.g. *[Optimizing Neural Network Hyperparameters with Gaussian Processes for Dialog Act Classification](http://arxiv.org/abs/1609.08703), IEEE SLT 2016*.
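To make the "try and see" approach concrete, here is a minimal sketch of a grid search over candidate hidden-layer sizes. The `toy_score` function is a synthetic stand-in for the cross-validated score you would compute in practice, and the candidate sizes are illustrative:

```python
def grid_search(candidates, evaluate):
    """Evaluate every candidate hidden-layer size and keep the best."""
    return max(candidates, key=evaluate)

def toy_score(size):
    # Synthetic stand-in for a cross-validated accuracy estimate;
    # in this toy example the score peaks at 64 hidden units.
    return -abs(size - 64) / 64.0

best = grid_search([8, 16, 32, 64, 128, 256], toy_score)
print(best)  # 64
```

A random search would sample from `candidates` instead of evaluating all of them, which scales better when several hyperparameters are tuned jointly.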
Upvotes: 5 [selected_answer]<issue_comment>username_2: For a more intelligent approach than random or exhaustive searches, you could try a genetic algorithm such as NEAT <http://nn.cs.utexas.edu/?neat>. However, this has no guarantee of finding a global optimum: it is simply an optimization algorithm based on performance and is therefore vulnerable to getting stuck in a local optimum.
Upvotes: 3 <issue_comment>username_3: You know you have too many neurons when you get overfitting.
That means the network is not working well, because it is trying to activate on a perfect match that is impossible, like two different cats with the same number of atoms. To put it another way, it becomes a detector that only activates on a picture of your pet cat and nothing else. You want the network to activate on a wider range of inputs, like any picture of a cat.
Overfitting is a problem that has no real quick fix. You can start with too few neurons and then keep adding more, or start out with a lot and then remove them until it works right.
Upvotes: 2 <issue_comment>username_4: Paper [<NAME>, <NAME>, <NAME>, et al. Rethinking the inception architecture for computer vision[J]. arXiv preprint arXiv:1512.00567, 2015.](https://arxiv.org/pdf/1512.00567.pdf) gives some general design principles:
>
> 1. Avoid representational bottlenecks, especially early in
> the network;
> 2. Balance the width and depth of the network. Optimal
> performance of the network can be reached by balancing
> the number of filters per stage and the depth of
> the network. Increasing both the width and the depth
> of the network can contribute to higher quality networks.
> However, the optimal improvement for a constant
> amount of computation can be reached if both are
> increased in parallel. The computational budget should
> therefore be distributed in a balanced way between the
> depth and width of the network.
>
>
>
These suggestions can't bring you the optimal number of neurons in a network though.
However, there is still some model compression research, e.g. [Structured Sparsity Learning (SSL) of Deep Neural Networks](https://github.com/wenwei202/caffe/tree/scnn), [SqueezeNet](https://github.com/songhan/SqueezeNet-Deep-Compression), and [Pruning networks](http://papers.nips.cc/paper/5784-learning-both-weights-and-connections-for-efficient-neural-network.pdf), that may shed some light on how to optimize the neurons per single layer.
In particular, [Structured Sparsity Learning of Deep Neural Networks](https://github.com/wenwei202/caffe/tree/scnn) adds a `Group Lasso` regularization term to the loss function to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs, which zeroes out some components (i.e., filters, channels, filter shapes, and layer depth) of the net structure and achieves remarkable compression and acceleration of the network, while keeping the classification accuracy loss small.
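As a rough sketch of the group-lasso idea (an illustration, not the paper's actual implementation; the row-per-group layout is an assumption for this example), the penalty sums the L2 norm of each weight group, so that whole groups, e.g. entire filters, are driven to zero together and can then be removed:

```python
import numpy as np

def group_lasso_penalty(W, axis=1):
    """Sum of L2 norms of weight groups (here: one group per row).

    Unlike plain L1, this penalty drives *whole groups* of weights to
    zero together, which is what removes entire filters or channels.
    """
    return np.sqrt((W ** 2).sum(axis=axis)).sum()

W = np.array([[3.0, 4.0],      # a group with norm 5
              [0.0, 0.0]])     # a group that has been zeroed out
print(group_lasso_penalty(W))  # 5.0
```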
Upvotes: 3
|
2016/08/02
| 650
| 2,494
|
<issue_start>username_0: Given the following definition of an intelligent agent (taken from a [Wikipedia article](http://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence#Intelligent_agent_definition))
>
> If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge then it is intelligent
>
>
>
and given that we, humans, all make mistakes, which means that we are not maximizing the expected value of a performance measure, then does this imply that humans are not intelligent?<issue_comment>username_1: It rather depends on how one defines several of the terms used. For example:
* Whether the term "expected" is interpreted in a formal (i.e.
statistical) sense.
* Whether it's assumed that humans have any kind of utilitarian
"performance measure".
The motivation for this description of "agent" arose from a desire to have a quantitative model - it's not clear that such a model is a good fit for human cognition.
However, there are alternative definitions of agents, for example the [BDI model](https://en.wikipedia.org/wiki/Belief%E2%80%93desire%E2%80%93intention_software_model), which are rather more open-ended and hence more obviously applicable to humans.
Upvotes: 3 [selected_answer]<issue_comment>username_2: >
> "the human mind is a battleground of higher level goals and lower level goals "
> — <NAME> <NAME>
>
>
>
I argue that in general human agents try to maximise a hierarchy of performance measures.
performance measures of humans
==============================
* Survival of genetic data
+ Energy supply and Water
+ Sex
- *myriad subgoals....*
Mysterious mental mechanisms which neuroscientists do not understand yet force the average human agent to maximise various evaluation metrics.
With the overarching goal of **survival of genetic information**. Successful genes are immortal. We are still under the yoke of an ancient genetic algorithm.
These measures are optimised throughout a human's lifetime. A 30-year-old agent is better at survival than a 10-year-old agent. A 30-year-old agent makes fewer mistakes.
We remember our mistakes. Mistakes are burned into our memory by high levels of neurotransmitters (and reinforcing of synapses) so we don't make them again.
We attempt to optimise a swarm of subgoals that are all connected in one way or another to the main goal **gene survival**.
* status
* money
* education
* happiness
Upvotes: 2
|
2016/08/02
| 1,947
| 7,444
|
<issue_start>username_0: This [quote by <NAME>](https://www.independent.co.uk/life-style/gadgets-and-tech/news/stephen-hawking-artificial-intelligence-could-wipe-out-humanity-when-it-gets-too-clever-humans-could-become-ants-being-stepped-a6686496.html) has been in headlines for quite some time:
>
> Artificial Intelligence could wipe out humanity when it gets too clever as humans will be like ants.
>
>
>
Why does he say this? To put it simply: what are the possible threats from AI (that <NAME> is worried about)? If we know that AI is so dangerous, why are we still promoting it? Why is it not banned?
What are the adverse consequences of the so-called [Technological Singularity](https://en.wikipedia.org/wiki/Technological_singularity)?<issue_comment>username_1: >
> To put it simply in layman terms, what are the possible threats from AI?
>
>
>
Currently, there are no threat.
The threat comes if humans create a so-called ultraintelligent machine, a machine that can surpass all intellectual activities of any human. This would be the last invention man would need to make, since this machine is better at inventing machines than humans are (since that is an intellectual activity). However, this could cause the machine to invent machines that can destroy humans, and we couldn't stop them because they would be so much smarter than we are.
This is all hypothetical, no one has even a clue of what an ultraintelligent machine looks like.
>
> If we know that AI is so dangerous why are we still promoting it? Why is it not banned?
>
>
>
As I said before, the existence of an ultraintelligent machine is hypothetical. Artificial Intelligence has lots of useful applications (more than this answer can contain), and if we develop it, we get even more useful applications. We just have to be careful that the machines won't overtake us.
Upvotes: 2 <issue_comment>username_2: Because he did not yet know how far away current AI is... Working in a media AI lab, I get this question a lot. But really... we are still a long way from this. The robots still do only what we describe to them in detail. Instead of seeing the robot as intelligent, I would look to the human programmer for where the creativity really happens.
Upvotes: 2 <issue_comment>username_3: It's not just Hawking, you hear variations on this refrain from a lot of people. And given that they're mostly very smart, well educated, well informed people (<NAME> is another, for example), it probably shouldn't be dismissed out of hand.
Anyway, the basic idea seems to be this: if we create "real" artificial intelligence, at some point it will be able to improve itself, which improves its ability to improve itself, which means it can improve its ability to improve itself even more, and so on... a runaway cascade leading to "superhuman intelligence". That is to say, leading to something that is more intelligent than we are.
So what happens if there is an entity on this planet which is literally more intelligent than us (humans)? Would it be a threat to us? Well, it certainly seems reasonable to speculate that it *could* be so. OTOH, we have no particular reason, right now, to think that it *will* be so.
So it seems that Hawking, Musk, etc. are just coming down on the more cautious / fearful side of things. Since we don't *know* if a superhuman AI will be dangerous or not, and given that it could be unstoppable if it were to become malicious (remember, it's smarter than we are!), it's a reasonable thing to take under consideration.
Eliezer Yudkowsky has also written quite a bit on this subject, including coming up with the famous "AI Box" experiment. I think anybody interested in this topic should read some of his material.
<http://www.yudkowsky.net/singularity/aibox/>
Upvotes: 3 <issue_comment>username_4: As <NAME> [said](http://www.theregister.co.uk/2015/03/19/andrew_ng_baidu_ai/), worrying about such threat from AI is like worrying about of overpopulation on Mars. It is science fiction.
[](https://i.stack.imgur.com/m6jnl.png)
That being said, given the rise of (much weaker) robots and other (semi-)autonomous agents, the fields of the law and ethics are increasingly incorporating them, e.g. see [Roboethics](https://en.wikipedia.org/wiki/Roboethics).
Upvotes: 2 <issue_comment>username_5: He says this because it can happen. If something becomes smarter than us, why would it continue to serve us? The worst case scenario is that it takes over all manufacturing processes and consumes all matter to convert it into material capable of computation, extending outward infinitely until all matter is consumed.
We know that AI is dangerous but it doesn't matter because most people don't believe in it. It goes against every comfort religion has to offer. Man is the end-all-be-all of the universe and if that fact is disputed, people will feel out of place and purposeless.
The fact is most people just don't acknowledge it's possible, or that it will happen in our lifetimes, even though many reputable AI experts put the occurrence of the singularity within two decades. If people truly acknowledged that AI that was smarter than them was possible, wouldn't they be living differently? Wouldn't they be looking to do things that they enjoy, knowing that whatever it is they do that they dread will be automated? Wouldn't everyone be calling for a universal basic income?
The other reason we don't ban it is because its promise is so great. One researcher could be augmented by 1,000 digital research assistants. All manual labor could be automated. For the first time, technology offers us real freedom to do whatever we please.
But even in this best case scenario where it doesn't overtake us, humans still have to adapt and alter their economic system to one where labor isn't necessary. Otherwise, those who aren't technically-trained will starve and revolt.
Upvotes: 2 <issue_comment>username_6: There are a number of long resources to answer this sort of question: consider <NAME>strong's book [Smarter Than Us](http://rads.stackoverflow.com/amzn/click/B00IB4N4KU), <NAME>'s book [Superintelligence](http://rads.stackoverflow.com/amzn/click/B00LOOCGB2), which grew out of this [edge.org answer](http://www.nickbostrom.com/views/superintelligence.pdf), [Tim Urban's explanation](http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html), or [<NAME>'s explanation](https://aisafety.wordpress.com/).
But here's my (somewhat shorter) answer: intelligence is all about decision-making, and we don't have any reason to believe that humans are anywhere near close to being the best possible at decision-making. Once we are able to build an AI AI researcher (that is, a computer that knows how to make computers better at thinking), the economic and military relevance of humans will rapidly disappear as any decision that could be made by a human could be made better by a computer. (Why have human generals instead of robot generals, human engineers instead of robot engineers, and so on.)
This isn't necessarily a catastrophe. If the Vulcans showed up tomorrow and brought better decision-making to Earth, we could avoid a lot of misery. The hard part is making sure that what we get are Vulcans who want us around and happy, instead of something that doesn't share our values.
Upvotes: 2
|
2016/08/02
| 1,833
| 6,853
|
<issue_start>username_0: I'm new to A.I. and I'd like to know in simple words, what is the fuzzy logic concept? How does it help, and when is it used?<issue_comment>username_1: It's analogous to analogue versus digital, or the many shades of gray in between black and white: when evaluating the truthiness of a result, in binary boolean it's either true or false (0 or 1), but when utilizing fuzzy logic, it's an estimated probability between 0 and 1 (such as 0.75 being mostly probably true). It's useful for making calculated decisions when all information needed isn't necessarily available.
[Wikipedia has a fantastic page for this](https://en.wikipedia.org/wiki/Fuzzy_logic).
Upvotes: 3 <issue_comment>username_2: *As complexity rises, precise statements lose meaning and meaningful statements lose precision.* ( <NAME> ).
Fuzzy logic deals with reasoning that is approximate rather than fixed and exact. This may make the reasoning more meaningful for a human:
[](https://i.stack.imgur.com/xdHPJ.png)
---
Fuzzy logic is an extension of Boolean logic by <NAME> in 1965, based on the mathematical theory of fuzzy sets, which is a generalization of classical set theory. By introducing the notion of *degree in the verification* of a condition, thus enabling a condition to be in a state other than true or false, fuzzy logic provides a very valuable flexibility for reasoning, which makes it possible to take into account inaccuracies and uncertainties.
One advantage of fuzzy logic in formalizing human reasoning is that the rules are set in natural language. For example, here are some rules of conduct that a driver follows, assuming that he does not want to lose his driver's licence:
[](https://i.stack.imgur.com/TM2UE.png)
Intuitively, it thus seems that input variables like those in this example are approximately appreciated by the brain, much like the degree of verification of a condition in fuzzy logic.
---
I've written a short [introduction to fuzzy logic](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=kz2aIc8AAAAJ&citation_for_view=kz2aIc8AAAAJ:eQOLeE2rZwMC) that goes into a bit more details but should be very accessible.
Upvotes: 7 [selected_answer]<issue_comment>username_3: Fuzzy logic is based on regular boolean logic. Boolean logic means you are working with truth values of either true or false (or 1 or 0 if you prefer). Fuzzy logic is the same apart from you can have truth values that are in-between true and false, which is to say, you are working with any number between 0 (inclusive) and 1 (inclusive). The fact that you can have a 'partially true and partially false' truth value is where the word "fuzzy" comes from. Natural languages often use fuzzy logic like "that balloon is red" meaning that balloon could be any colour that is similar enough to red, or "the shower is warm". Here is a rough diagram for how "the temperature of the shower is warm" could be represented in terms of fuzzy logic (the y axis being the truth value and the x-axis being the temperature):
[](https://i.stack.imgur.com/G7szY.png)
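A membership curve like the one above can be sketched as a small piecewise-linear function. The temperature breakpoints below are invented for illustration only:

```python
def warm(temp_c):
    """Degree to which a shower temperature (in Celsius) counts as 'warm'.

    Trapezoidal membership with made-up breakpoints: 0 below 30,
    rising linearly to 1 between 30 and 36, fully 'warm' from 36 to 40,
    falling back to 0 between 40 and 45.
    """
    if temp_c <= 30 or temp_c >= 45:
        return 0.0
    if 36 <= temp_c <= 40:
        return 1.0
    if temp_c < 36:                     # rising edge
        return (temp_c - 30) / (36 - 30)
    return (45 - temp_c) / (45 - 40)    # falling edge

print(warm(25))  # 0.0 -> clearly not warm
print(warm(33))  # 0.5 -> partially warm
print(warm(38))  # 1.0 -> fully warm
```

The function returns a degree of truth rather than a yes/no answer, which is exactly the "fuzziness" described above.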
Fuzzy logic can be applied to boolean operations such as **and**, **or**, and **not**. Note that you can define the fuzzy logic operations in different ways. One way is with the min and max functions which return the lessermost and greatermost values of the two values inputted respectively. This would work as such:
```
A and B = min(A,B)
A or B = max(A,B)
not A = 1-A
(where A and B are real values from 0 (inclusive) to 1 (inclusive))
```
When defined like this they are called the **Zadeh operators**.
Another way would be to define **and** as the first argument times the second argument, which yields different outputs for the same inputs as the Zadeh **and** operator (`min(0.5,0.5)=0.5, 0.5*0.5=0.25`). Then other operators are derived based on the **and** and **not** operators. This would work as such:
```
A and B = A*B
not A = 1-A
A or B = not ((not A) and (not B)) = 1-((1-A)*(1-B)) = 1-(1-A)*(1-B)
(where A and B are real values from 0 (inclusive) to 1 (inclusive))
```
You can then use the three "basic fuzzy logic operations" to build all other "fuzzy logic operations", just like you can use the three "basic boolean operations" to build all other "boolean logic operations".
Note that the latter definition of the three basic operations is more in line with probability theory, so could be considered the more natural choice.
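As a minimal sketch, both families of operators described above can be written directly in Python; note how the same inputs give different results under each definition of **and**:

```python
# Zadeh operators: and = min, or = max, not = complement.
def zadeh_and(a, b): return min(a, b)
def zadeh_or(a, b):  return max(a, b)
def fuzzy_not(a):    return 1 - a

# Product (probability-style) operators, with 'or' derived from 'and' and 'not'.
def prod_and(a, b): return a * b
def prod_or(a, b):  return fuzzy_not(prod_and(fuzzy_not(a), fuzzy_not(b)))  # 1-(1-a)*(1-b)

a, b = 0.5, 0.5
print(zadeh_and(a, b))     # 0.5
print(prod_and(a, b))      # 0.25 -- same inputs, different result
print(zadeh_or(0.2, 0.7))  # 0.7
print(prod_or(0.2, 0.7))   # 0.76
```

Plugging in 0 and 1 for `a` and `b` shows that both families reduce to ordinary Boolean **and**/**or**/**not** at the extremes; they only differ on the in-between values.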
Sources:
[Fuzzy logic wikipedia](https://en.wikipedia.org/wiki/Fuzzy_logic),
[Boolean algebra wikipedia](https://en.wikipedia.org/wiki/Boolean_algebra),
[Explanation of fuzzy logic on Youtube](https://www.youtube.com/watch?v=r804UF8Ia4c)
Note: if anyone could suggest some more reliable sources in the comments, I will happily add them to the list (I understand that the current ones aren't too reliable).
Upvotes: 5 <issue_comment>username_4: Why is it useful?
Many things we don't know for sure. We estimate and are often uncertain, but nearly never 100% sure. It may seem like a weakness, but because of this fuzzy approach we can function in this complex world and even behave quite intelligently. Hence it's a way to simplify things. And it gives you some leeway to fill the gaps appropriately, e.g. to adapt to slightly varying situations.
P.S.: In natural language we express this with quantitative terms like more, less, nearly, rather, immense and so on. But quantifying things is hard for us.
Upvotes: 1 <issue_comment>username_5: It is making deductions based on probability and statistics, like humans make decisions all the time. We are never 100% sure the decision we have made is the right one but there is always some doubt present. Ai will definitely need to use it in some form.
Upvotes: 1 <issue_comment>username_6: Fuzzy Logic is a way of dealing with uncertainties, which is something that computers don't do naturally which human do very well. The way we instantly think of dealing with things and the way that computers tend to deal with certain things is 'True' or 'False', or '1 or '0'. For example, you might classify someone is alive as alive ('True') or has passed away ('False'). We only have two options, there is no inbetween. With fuzzy logic, instead of going with 'True' or 'False', between that, we have what's called a degree of truth. So for example, when we look out the window, we might say "It's a bit cloudy today, maybe it's 0.5 'nice day' or 0.7 'nice day'". So essentially, with fuzzy logic we always have grey areas which vary from person-to-person.
Source: <https://youtu.be/r804UF8Ia4c>
Upvotes: 1
|
2016/08/02
| 2,572
| 10,786
|
<issue_start>username_0: The [Turing Test](https://en.wikipedia.org/wiki/Turing_test) was the first test of artificial intelligence and is now a bit outdated. The [Total Turing Test](https://en.wikipedia.org/wiki/Turing_test#Total_Turing_test) aims to be a more modern test which requires a much more sophisticated system. What techniques can we use to identify an artificial intelligence (weak AI) and an [artificial general intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence) (strong AI)?<issue_comment>username_1: The problem of the Turing Test is that it tests the machines ability to resemble humans. Not necessarily every form of AI has to resemble humans. This makes the Turing Test less reliable. However, it is still useful since it is an actual test. It is also noteworthy that there is a prize for passing or coming closest to passing the Turing Test, the [Loebner Prize](https://en.wikipedia.org/wiki/Loebner_Prize).
The intelligent agent definition of intelligence states that an agent is intelligent if it acts so to maximize the expected value of a performance measure based on past experience and knowledge. (paraphrased from [Wikipedia](http://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence#Intelligent_agent_definition)). This definition is used more often and does not depend on the ability to resemble humans. However, it is harder to test this.
Upvotes: 3 <issue_comment>username_2: The rhetorical point of the Turing Test is that it places the 'test' for 'humanity' in *observable outcomes*, instead of in *internal components*. If you would behave the same in interacting with an AI as you would with a person, how could *you* know the difference between them?
But that doesn't mean it's reliable, because intelligence has many different components and there are many sorts of intellectual tasks. The Turing Test, in some respects, is about the reaction of people to behavior, which is not at all reliable--remember that many people thought [ELIZA](https://en.wikipedia.org/wiki/ELIZA), a very simple chatbot, was an excellent listener and got deeply emotionally involved very quickly. It calls to mind the [Ikea commercial about throwing out a lamp](https://www.youtube.com/watch?v=dBqhIVyfsRg), where the emotional attachment comes *from the human viewer* (and the music), rather than from the lamp.
Turing tests for specific economic activities are much more practically interesting--if one can write an AI that replaces an Uber driver, for example, what that will imply is much clearer than if someone can create a conversational chatbot.
Upvotes: 4 <issue_comment>username_3: There are many definitions of Artificial Intelligence out in the wild. All these definitions are part of one (or more) of the areas. There are four main domains, and the picture below will shed some light over this.
[](https://i.stack.imgur.com/m7ZlO.png)
The Turing Test revolves around the left-hand side of this diagram, which is mostly concerned with how humans think or act. But we know that this is not all there is. The Turing Test has not much to offer when it comes to what AI is in a general sense.
The Turing Test, as Wikipedia states, was created to test machines for behaviour equivalent to, or indistinguishable from, that of a human. Artificial Intelligence is much more than what humans can do or how they act. There are many human acts that are considered unintelligent, and sometimes inhuman too.
The [Chinese Room Argument](https://en.wikipedia.org/wiki/Chinese_room) focuses on something very important when it comes to **"Consciousness v/s Simulation of Consciousness"**. John Searle argued that it is possible for a machine (or human) to follow a huge number of predefined rules (an algorithm) in order to complete a task without thinking or possessing a mind. Weak AIs are good at simulating the ability to understand, but don't really understand what they are doing. They don't exhibit **"Self-Awareness"** and don't form representations about themselves. **"I want that v/s I know I want that"** are two different things.
The Theory of Mind states that a good AI should not just form representations about the world it is working in, but also about other agents and entities in that world. These two concepts of *self-awareness and theory of mind* draw a thin line between weak and strong AI.
When it comes to the Turing Test, it fails on many grounds, and so does the Total Turing Test, which adds another layer to the test. Most researchers believe that the Turing Test is just a distraction from the main goal, something that hinders them from fruitful work. Consider this: suppose you ask a difficult arithmetic problem in order to distinguish between human and machine. If the machine wants to pretend it is human, then it will lie. This is not what we want. Going for the Turing Test sets an upper bound on the AI that can be created. Also, making AI act and behave like humans is not a very good idea. Humans are not very good at making the right decisions all the time. This is the reason why we read about wars in our history books. Decisions which we make are often biased, have selfish origins, etc. We don't want an AI to come with all those things.
I don't think there is one test to test an AI. This is because AI has many definitions, many types. Whether an AI is weak or strong can be tagged while looking for answers to questions like, "I want that v/s I know I want that", "Who am I and what exactly I am doing (from machine's perspective)", plus some other questions I mentioned above.
Upvotes: 2 <issue_comment>username_4: The classical Turing Test certainly does have limitations. Because I don't see it mentioned here yet, I'll suggest you read about [The Chinese Room](https://en.wikipedia.org/wiki/Chinese_room), which is one of the most commonly cited reasons why the Turing Test indeed falls short of ascertaining true 'consciousness'. However, I'd also note that Turing himself, [in the original paper that proposed the Turing Test](http://www.loebner.net/Prizef/TuringArticle.html), explicitly acknowledged himself that the test **was not a test to detect consciousness**:
>
> I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous, If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
>
>
> The new form of the problem can be described in terms of a game which we call the 'imitation game."
>
>
>
This imitation game is the test that we now know today (and also the inspiration for the name of a recent feature film starring <NAME> and <NAME>).
Upvotes: 3 <issue_comment>username_5: *Is the Turing Test, or any of its variants, a reliable test of artificial intelligence?*
**Myopia**
Yes, if one defines the term Artificial Intelligence in terms of Alan Turing's Imitation Game or one of its variants. The approach may be, at the same time, both valid and very limited as a definition of intelligence as people interpreted the word before AI emerged.
**Proven Intelligence**
Consequently, there are a large number of alternative approaches to measuring intelligence, artificial or otherwise.
* Becoming a chess grand master
* Authoring a winning chess program
* Receiving a highly selective international award
* Creating a strategy that wins a war or a peace
* Overcoming the thousands of rounds of elimination in business or politics to become President
* Authoring brilliant articles, papers, screenplays, lectures, speeches, books, or poems that generate significant human paradigm shifts
* Showing genius level results in a Mensa test
* Becoming one of the most wealthy people in the world
**Normal Measurement of Normal Intelligence**
But these are measurements of exceptional intelligence of some kind, mostly because the leaders in these areas have reliably applied intelligence over multiple domains in such ways that led to remarkable success through multiple real life scenarios. The reliability is an attribute of the person possessing the intelligence, not the test of intelligence itself.
These are more mundane, yet perhaps more valid and reliable, measures of intelligence.
* Raising healthy and loving children as verifiable through the careful interviewing of friends and associates of the members of the family
* Repeated and successful remedy of many conditions of varying types that were once identified as broken in some tangible and measurable way and found to be measurably corrected as a result of the application of intelligence comprehension, analysis, and remedial action
* Conversational intelligence as measurable through the participants in conversation attributing their own success to the ideas and examples set by the conversationalist
**What Are the Truly Desired End Goals?**
Perhaps the primary characteristic of the Turing Test is that it is artificial. If artificial intelligence is what we want from AI software, then that is what we will receive. However, it is likely we want something either considerably more or considerably less.
We want more in that it would be nice if some computers could be our friend, our mentor, and an unpaid employee with exceptional abilities leading to our personal success in terms of income, influence, popularity, or legacy.
We want less in that we want some computers to do domain specific tasks and remain as fully subservient tools, perhaps with some personality and warmth, like a ship or some other complex device we give human names, yet without the unpredictability of the far reaching capabilities of human intelligence.
Upvotes: 2 <issue_comment>username_6: It depends on how the test is given. For example, when people claimed that a machine had successfully passed the Turing Test a few years ago, the criteria was pretty weak. It only had to fool 30% of the people for 5 minutes. That's not much of a test. To put this in perspective you probably wouldn't detect schizophrenia, autism, learning disabilities, or dementia with this criteria.
In spite of the hype, the current AI's can be detected 100% of the time using fairly simple questions.
Upvotes: 2
|
2016/08/02
| 1,609
| 6,905
|
<issue_start>username_0: I've heard the idea of the technological singularity, what is it and how does it relate to Artificial Intelligence? Is this the theoretical point where Artificial Intelligence machines have progressed to the point where they grow and learn on their own beyond what humans can do and their growth takes off? How would we know when we reach this point?<issue_comment>username_1: The concept of "the singularity" is when machines outsmart the humans. Although Stephen Hawking opinion is that this situation is inevitable, but I think it'll be very difficult to reach that point, because every A.I. algorithm needs to be programmed by humans, therefore it would be always more limited than its creator.
We would probably know when that point when humanity will lose control over Artificial Intelligence where super-smart AI would be in competition with humans and maybe creating more sophisticated intelligent beings occurred, but currently, it's more like science fiction (aka [Terminator's Skynet](https://en.wikipedia.org/wiki/Skynet_(Terminator))).
The risk could involve killing people (like self-flying war *drones* making their own decision), destroying countries or even the whole planet (like A.I. connected to the nuclear weapons (aka [WarGames](https://en.wikipedia.org/wiki/WarGames) movie), but it doesn't prove the point that the machines would be smarter than humans.
Upvotes: 2 <issue_comment>username_2: The [technological singularity](https://en.wikipedia.org/wiki/Technological_singularity) is a theoretical point in time at which a *self-improving* [artificial general intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence) becomes able to understand and manipulate concepts outside of the human brain's range, that is, the moment when it can understand things humans, by biological design, can't.
The fuzziness about the singularity comes from the fact that, from the singularity onwards, history is effectively *unpredictable*. Humankind would be unable to predict any future events, or explain any present events, as science itself becomes incapable of describing machine-triggered events. Essentially, machines would think of us the same way we think of ants. Thus, we can make no predictions past the singularity. Furthermore, as a logical consequence, we'd be unable to define the point at which the singularity may occur at all, or even recognize it when it happens.
However, in order for the singularity to take place, AGI needs to be developed, and [whether that is possible is quite a hot debate](https://en.wikipedia.org/wiki/Artificial_general_intelligence#Feasibility) right now. Moreover, an algorithm that creates *superhuman intelligence* (or [*superintelligence*](https://en.wikipedia.org/wiki/Superintelligence)) out of bits and bytes would have to be designed. By definition, a human programmer wouldn't be able to do such a thing, as his/her brain would need to be able to comprehend concepts beyond its range. There is also the argument that an [*intelligence explosion*](https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion) (the mechanism by which a technological singularity would theoretically be formed) would be impossible due to the difficulty of the design challenge of making itself more intelligent, getting larger proportionally to its intelligence, and that the difficulty of the design itself may overtake the intelligence required to solve the said challenge.
Also, there are related theories involving [*machines taking over humankind*](https://en.wikipedia.org/wiki/AI_takeover) and all of that sci-fi narrative. However, that's unlikely to happen, if [Asimov's laws](https://en.wikipedia.org/wiki/Three_Laws_of_Robotics) are followed appropriately. [Even if Asimov's laws were not enough](https://www.youtube.com/watch?app=desktop&v=7PKx3kS7f4A), a series of constraints would still be necessary in order to avoid the misuse of AGI by misintentioned individuals, and Asimov's laws are the nearest we have to that.
Upvotes: 6 [selected_answer]<issue_comment>username_3: The "singularity," viewed narrowly, refers to a point at which economic growth is so fast that we can't make useful predictions about what the future past that point will look like.
It's often used interchangeably with "intelligence explosion," which is when we get so-called Strong AI, which is AI that is intelligent enough to understand and improve itself. It seems reasonable to expect that the intelligence explosion would immediately lead to an economic singularity, but the reverse is not necessarily true.
Upvotes: 1 <issue_comment>username_4: The singularity, in the context of AI, is a theoretical event whereby an intelligent system with the following criteria is deployed.
1. Capable of improving the range of its own intelligence or deploying another system with such improved range
2. Willing or compelled to do so
3. Able to do so in the absence of human supervision
4. The improved version sustains criteria (1) through (3) recursively
By induction, the theory then predicts that a sequence of events will be generated with a potential rate of intelligence increase that may vastly exceed the potential rate of brain evolution.
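Criteria (1) through (4) can be caricatured as a toy recursion. Every number below is invented; this is only an illustration of the induction step, not a prediction:

```python
def improvement_factor(intelligence):
    # Toy assumption: a smarter system finds proportionally bigger improvements.
    return 1.0 + 0.1 * intelligence

def generations(start=1.0, steps=10):
    """Each generation deploys a successor scaled by its own improvement factor."""
    levels = [start]
    for _ in range(steps):
        levels.append(levels[-1] * improvement_factor(levels[-1]))
    return levels

# Growth accelerates at every step under this assumption; if improvement_factor
# instead decayed with intelligence, the sequence would plateau.
print(generations())
```

Which of those two regimes real systems would fall into is precisely what the theory leaves indeterminate.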
How obligated this self-improving entity or population of procreated entities would be to preserve human life and liberty is indeterminate. The idea that such an obligation can be part of an irrevocable software contract is naive in light of the nature of the capabilities tied to criteria (1) through (4) above. As with other powerful technology, the risks are as numerous and far-reaching as the potential benefits.
Risks to humanity do not require intelligence. There are other contexts for the use of the term singularity, but they are outside the scope of this AI forum, though they may be worth a brief mention for clarity. Genetic engineering, nuclear engineering, globalization, and basing an international economy on a finite energy source being consumed thousands of times faster than it arose in the earth: these are other examples of high-risk technologies and mass trends that pose risks as well as benefits to humanity.
Returning to AI, the major caveat in the singularity theory is its failure to incorporate probability. Although it may be possible to develop an entity that conforms to criteria (1) through (4) above, it may be improbable enough so that the first event occurs long after all the current languages spoken on Earth are dead.
On the other extreme of the probability distribution, one could easily argue that there is a nonzero probability that the first event already occurred.
Along those lines, if a smarter presence were already existent on the Internet, how likely would it be that it would find it in its best interest to reveal itself to the lowly human beings? Do we introduce ourselves to a passing maggot?
Upvotes: 2
|
2016/08/02
| 2,033
| 7,688
|
<issue_start>username_0: I've seen emotional intelligence defined as the capacity to be aware of, control, and express one's emotions, and to handle interpersonal relationships judiciously and empathetically.
1. What are some strategies for artificial intelligence to begin to tackle this problem and develop emotional intelligence for computers?
2. Are there examples where this is already happening to a degree today?
3. Wouldn't a computer that passes a Turing test necessarily express emotional intelligence or it would be seen as an obvious computer?
Perhaps that is why early programs that pass the test represented young people, who presumably have lower emotional intelligence.<issue_comment>username_1: I think your question fits nowadays more in the field of [Human-Robot Interaction](https://en.wikipedia.org/wiki/Human%E2%80%93robot_interaction), which relies largely on [vision](http://nordicapis.com/20-emotion-recognition-apis-that-will-leave-you-impressed-and-concerned/) for recognition of gestures and follow movements, as well as *soft, natural* movements as a response. Note that the movements of the face and hands belong to the most complex tasks, involving *many* muscles at a time.
I strongly recommend the film [Plug & Pray](http://www.plugandpray-film.de/en/trailer.html) to have an idea of what people are researching in this area.
You may also find Eliza (which you can try [here](https://web.njit.edu/%7Eronkowit/eliza.html)) interesting. It is a classic in the history of AI and attempts to mimic an analyst (in the psychological sense). (I am thinking of Eliza not because of its emotional intelligence, but because it was [apparently taken seriously](http://www.alicebot.org/articles/wallace/eliza.html) by a couple of humans. Could this be taken as a sort of (approved) Turing test? What does it say about the humans it met?)
On the *purely human* end of the scale, I sometimes wonder about our (my) emotional intelligence myself. Would I want to implement such an intelligence in an artificial agent at all?
Upvotes: 3 <issue_comment>username_2: Architectures for recognizing and generating emotion are typically somewhat complex and don't generally have short descriptions, so it's probably better to reference the literature rather than give a misleading soundbite:
Some of the early work in *affective computing* was done by [<NAME>](https://web.media.mit.edu/%7Epicard/). There is a [research group at MIT](http://affect.media.mit.edu/) specializing in this area.
Some of the more developed architectural ideas are due to <NAME>.
A pre-publication draft of his book, *The Emotion Machine*, is available via [Wikipedia](https://en.wikipedia.org/wiki/The_Emotion_Machine).
Emotional intelligence would certainly seem to be a necessary component of passing the Turing test - indeed, the original Turing test essay, [Computing Machinery and Intelligence](https://academic.oup.com/mind/article/LIX/236/433/986238), implied some degree of "Theory of Mind" about Mr. Pickwick's preferences:
>
> Yet Christmas is a Winter’s day, and I do not think Mr. Pickwick would mind the comparison.
>
>
>
Upvotes: 5 [selected_answer]<issue_comment>username_3: [Emotions](https://en.wikipedia.org/wiki/Emotion) aren't something that you can implement - they're very complex. However, you can attempt to mimic them. Human emotions are closely related to conscious experience characterized by intense mental activity, which is based on interpretation of events.
Recent brain studies (including research in cognitive psychology and neurophysiology) suggest that a human's emotional assessment of every action or event plays an important role in human mental processes.
The recent [2016 Annual Meeting of the BICA Society](http://bica2016.bicasociety.org/) brought together scientists from around the world to approach principles and mechanisms of human thought to create biologically inspired AI.
For example, in the proposal of Samsonovich (a professor in the Cybernetics Department at the [MEPhI](https://en.wikipedia.org/wiki/National_Research_Nuclear_University_MEPhI)), the idea is to test AI in computer games which involve actions with emotional content, where the AI may engage with players in different types of social relationships (such as trust, subordination or leadership).
<NAME> of the [ICT](https://en.wikipedia.org/wiki/Institute_for_Creative_Technologies) invented virtual characters capable of identifying and expressing emotions by communicating with humans in natural language, based on situations where, for example, the AI can deceive a human to achieve a desired result. The effect is obviously not achieved by re-creating human consciousness, but by statistically adjusting parameters.
Researchers from the Institute of Cyber Intelligence Systems at MEPhI hope to be able to create, in the near future, virtual beings capable of planning, setting goals and establishing social relationships with humans, possessing both emotional and narrative intelligence that can interpret the context of events.
Source: [Researcher proposes social emotions test for artificial intelligence](http://phys.org/news/2016-07-social-emotions-artificial-intelligence.html)
Upvotes: 2 <issue_comment>username_4: So you may be familiar with Word2Vec, (W2V) which as [Wikipedia describes](http://en.wikipedia.org/wiki/Word2vec)1 "captures the linguistic contexts of words" using vector arithmetic. For example, subtract 'Paris' from 'France' and add 'Italy' and you get 'Rome'.
What you need is something like a Sentiment2Vec (S2V) that captures the similarities between emotional transitions. Something like: subtract 'fear' from 'sadness', add 'joy' and you get 'hope'. Or: subtract 'sting' from 'papercut', add 'smashed' and you get 'throbbing'.
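The vector arithmetic in both cases can be sketched with toy embeddings. A hypothetical, hand-picked 2-D example (real Word2Vec vectors are learned from a large corpus and have hundreds of dimensions):

```python
import math

# Hand-picked toy embeddings, chosen purely so the analogy arithmetic
# works out -- these are NOT real learned vectors.
vecs = {
    "paris":   (1.0, 1.0),
    "france":  (1.0, 0.0),
    "rome":    (2.0, 1.0),
    "italy":   (2.0, 0.0),
    "berlin":  (3.0, 1.0),
    "germany": (3.0, 0.0),
}

def analogy(a, b, c):
    """Return the word closest to vec(a) - vec(b) + vec(c), excluding inputs."""
    target = [va - vb + vc for va, vb, vc in zip(vecs[a], vecs[b], vecs[c])]
    return min((w for w in vecs if w not in (a, b, c)),
               key=lambda w: math.dist(vecs[w], target))

print(analogy("paris", "france", "italy"))  # prints: rome
```

A Sentiment2Vec would apply the same kind of `analogy` function to emotion vectors, so that, given suitable embeddings, `analogy('sadness', 'fear', 'joy')` would land near 'hope'.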
The catch is that you don't have an easily accessible corpus of emotional contexts to train with, like you have with words. If you had a million hours of fMRI - mapping the transitions between emotions in hundreds of subjects - then you could use that data to build an S2V. You probably don't have that data though.
In the meantime, you could just build a W2V that specializes in sentiment. You could even try to use a current sentiment analysis engine to bootstrap it. Perhaps if you read enough text that says "I got a papercut and it stings" and "I smashed my finger and it's throbbing" then you could eventually produce an S2V. Children's books often use explicit language regarding emotional context ("this made the boy feel sad").
But words are still a far cry from the experiential context that a connectome map would provide. To test whether you have something useful or not, you might want to implement your S2V in a mouse foraging simulation - see whether it produces typical behavior and if any cooperative or competitive dynamics can organically grow out of your S2V.
Some further info on the subject:
In 2014, [Glasgow University claimed](http://www.bbc.com/news/uk-scotland-glasgow-west-26019586)2 that there are four primary emotions: happiness, sadness, fear and anger.
[This website](http://changingminds.org/explanations/emotions/basic%20emotions.htm)3 provides nice (if somewhat short) hierarchical breakdown of secondary and tertiary emotions under primary emotions.
---
**References**
1: [en.wikipedia.org/wiki/Word2vec](http://en.wikipedia.org/wiki/Word2vec)
2: [www.bbc.com/news/uk-scotland-glasgow-west-26019586](http://www.bbc.com/news/uk-scotland-glasgow-west-26019586)
3: [changingminds.org/explanations/emotions/basic%20emotions.htm](http://changingminds.org/explanations/emotions/basic%20emotions.htm)
Upvotes: 1
|
2016/08/02
| 885
| 3,995
|
<issue_start>username_0: Since human intelligence presumably is a function of a natural genetic algorithm in nature, is using a genetic algorithm in a computer an example of artificial intelligence? If not, how do they differ? Or perhaps some are and some are not expressing artificial intelligence depending upon the scale of the algorithm and what it evolves into?<issue_comment>username_1: This is probably more a question of philosophy than anything. In terms of how things are commonly defined, I'll say "yes, genetic algorithms are part of AI". If you pick up a comprehensive book on artificial intelligence, there will probably be a chapter on genetic algorithms (or more broadly, evolutionary algorithms).
One area that has been extensively studied in the past is the idea of using genetic algorithms to train neural networks. I don't know if people are still actively researching this topic or not, but it at least illustrates that GA's are part of the overall rubric of AI in one regard.
Upvotes: 2 <issue_comment>username_2: The notion of genetics used in Genetic Algorithms (GAs) is a *very* stripped down version relative to genetics in nature, essentially consisting of a population of 'genes' (representing solutions to some predefined problem) subject to 'survival of the fittest' during iterated application of recombination and mutation.
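That loop (a population of genes, fitness-based selection, recombination, mutation) can be sketched in a few lines on the toy "maximize the number of 1-bits" problem; the population size, mutation rate and generation count here are arbitrary illustrative choices:

```python
import random

random.seed(0)

def fitness(genes):            # toy objective: count the 1-bits
    return sum(genes)

def crossover(a, b):           # one-point recombination
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genes, rate=0.05):  # flip each bit with a small probability
    return [g ^ 1 if random.random() < rate else g for g in genes]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
for _ in range(60):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # survival of the fittest
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(20)]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # approaches the optimum of 16
```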
Nowadays, the term 'Computational Intelligence' (CI) tends to be used to describe computational techniques intended to produce 'the appearance of intelligence by *any* computational means', rather than specifically attempting to mimic the mechanisms that are believed to give rise to human (or animal) intelligence.
That said, the distinction between CI and AI is not so hard and fast, and arguably arose during the 'AI Winter' when the term AI was out of fashion.
Upvotes: 2 <issue_comment>username_3: Human intelligence is **not** an example of natural genetic algorithms.
Genetic algorithms maintain collections of candidate solutions that are recombined with each other to make new solutions, eventually returning the best solution found. Human intelligence is a network of neurons doing information processing, which behaves quite differently.
But the fact that something doesn't behave the way human intelligence does doesn't mean it isn't an AI algorithm; I would class 'genetic algorithms' as a numerical optimization technique, and since optimization and intelligence are deeply linked, any numerical optimization technique could be seen as an AI technique.
Upvotes: 2 <issue_comment>username_4: * An ability that is commonly attributed to intelligence is **problem solving**.
* Another one is **learning** (improving itself from experience).
* Artificial intelligence can be defined as "replicating intelligence, or parts of it, at least in appearance, inside a computer" (dodging the definition of intelligence itself).
* Genetic algorithms are computational **problem solving** tools that find and improve solutions (they **learn**).
Thus, genetic algorithms are a kind of artificial intelligence.
Regarding scale, I don't see it as an important factor for defining G.A. as A.I or not. The same way we can simply classify different living forms as more or less intelligent instead of just saying intelligent or not intelligent.
Finally, let's just make an important distinction: our brains are the product of natural selection, but the brains themselves don't use the same principle in order to achieve intelligence.
Upvotes: 4 [selected_answer]<issue_comment>username_5: To answer this question, you must first know what intelligence is, and since there is no clear line between intelligent and not, this question is more philosophical than technical.
In my opinion, intelligence is the ability to define a problem and find a way to solve it using memory and reasoning. Since a genetic algorithm follows this structure, I would say that it falls under the category of artificial intelligence.
Upvotes: 1
|
2016/08/02
| 2,982
| 11,644
|
<issue_start>username_0: These two terms seem to be related, especially in their application in computer science and software engineering.
* Is one a subset of another?
* Is one a tool used to build a system for the other?
* What are their differences and why are they significant?<issue_comment>username_1: Machine learning is a subset of artificial intelligence. Roughly speaking, it corresponds to its learning side. There are no "official" definitions; the boundaries are a bit fuzzy.
Upvotes: 5 <issue_comment>username_2: **Machine learning** has been defined by many people in multiple (often similar) ways [[1](http://noiselab.ucsd.edu/ECE228/Murphy_Machine_Learning.pdf), [2](http://www.cs.ubbcluj.ro/%7Egabis/ml/ML-books/McGrawHill%20-%20Machine%20Learning%20-Tom%20Mitchell.pdf)]. One definition says that machine learning (ML) is the field of study that gives computers the *ability to learn* without being explicitly programmed.
Given the above definition, we might say that machine learning is geared towards problems for which we have (lots of) data (experience), from which a program can learn and can get better at a task.
**Artificial intelligence** has many more aspects, where machines may not get better at tasks by learning from data, but may exhibit *intelligence* through rules (e.g. expert systems like [Mycin](https://en.wikipedia.org/wiki/Mycin)), [logic](https://silp.iiita.ac.in/wp-content/uploads/PROLOG.pdf) or algorithms, e.g. path-finding).
The book [*Artificial Intelligence: A Modern Approach*](http://aima.cs.berkeley.edu/) shows more research fields of AI, like *Constraint Satisfaction Problems*, *Probabilistic Reasoning* or *Philosophical Foundations*.
Upvotes: 6 <issue_comment>username_3: Many terms have 'mostly' the same meanings, and so the differences are just in emphasis, perspective, or historical descent. People disagree as to which label refers to the superset or the subset; there are people who will call AI a branch of ML and people who will call ML a branch of AI.
I typically hear Machine Learning used as a form of 'applied statistics' where we specify a learning problem in enough detail that we can just feed training data into it and get a useful model out the other side.
I typically hear Artificial Intelligence as a catch-all term to refer to any sort of intelligence embedded in the environment or in code. This is a very expansive definition, and others use narrower ones (such as focusing on artificial *general* intelligence, which is not domain-specific). (Taken to an extreme, my version includes thermostats.)
Upvotes: 3 <issue_comment>username_4: Machine learning is a subset of artificial intelligence, covering only a small part of its potential. It's a specific way to implement AI, largely focused on statistical/probabilistic and evolutionary techniques.[Q](https://www.quora.com/What-are-the-main-differences-between-artificial-intelligence-and-machine-learning/answer/Phillip-Rhodes)
### Artificial intelligence
Artificial intelligence is '**the theory and development of computer systems able to perform tasks normally requiring human intelligence**' (such as visual perception, speech recognition, decision-making, and translation between languages).
We can think of AI as the concept of non-human decision making[Q](https://www.quora.com/What-are-the-main-differences-between-artificial-intelligence-and-machine-learning/answer/Yuval-Ariav) which aims to simulate cognitive human-like functions such as problem-solving, decision making or language communication.
### Machine learning
Machine learning (ML) is basically **learning through doing**, implemented by building models that can predict and identify patterns in data.
According to Prof. [<NAME>](http://www.cs.colby.edu/srtaylor/) of Computer Science and her [lecture paper](http://cs.colby.edu/courses/S15/cs251/LectureNotes/Lecture_15_MLandDMintro_03_09_2015.pdf), and also the [Wikipedia page](https://en.wikipedia.org/wiki/Learning#Machine_learning), 'machine learning is a branch of artificial intelligence and **it's about construction and study of systems that can learn from data**' (e.g. learning from existing email messages how to distinguish between spam and non-spam).
According to [Oxford Dictionaries](http://www.oxforddictionaries.com/definition/english/machine-learning), machine learning is '**the capacity of a computer to learn from experience**' (e.g. to modify its processing on the basis of newly acquired information).
We can think of ML as computerized pattern detection in existing data to predict patterns in future data.[Q](https://www.quora.com/What-are-the-main-differences-between-artificial-intelligence-and-machine-learning/answer/Yuval-Ariav)
---
In other words, **machine learning involves the development of self-learning algorithms** and **artificial intelligence involves developing systems or software** that mimic how humans respond and behave in a given circumstance.[Quora](https://www.quora.com/What-are-the-main-differences-between-artificial-intelligence-and-machine-learning/answer/Sakthi-Dasan-2)
Upvotes: 4 <issue_comment>username_5: ### Artificial intelligence
According to the book [Artificial Intelligence: A Modern Approach](https://www.cin.ufpe.br/%7Etfl2/artificial-intelligence-modern-approach.9780131038059.25368.pdf#page=27) (section 1.1), artificial intelligence (AI) has been defined in *multiple* ways, which can be organized into 4 categories.
1. **Thinking Humanly**
2. **Thinking Rationally**
3. **Acting Humanly**
4. **Acting Rationally**
Figure 1.1 (of the same book) contains 8 definitions (by renowned people like Bellman, Winston or Kurzweil).
[](https://i.stack.imgur.com/O9J5Q.png)
Each box contains 2 similar definitions (i.e. both fall into the same category). These definitions vary along 2 dimensions. The definitions in the top row are concerned with **thought-processes** and **reasoning**, while the ones in the bottom are concerned with **behaviour**. The definitions on the left are associated with **human intelligence**, while the ones on the right with an idealized version of intelligence, which the authors of the AIMA book call **rationality**. So, for example, the definitions in the top-left corner are based on **thinking humanly**, while the definitions on the bottom-right corner are based on **acting rationally**.
There is also a definition of AI by [<NAME>](http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html), who is [one of the official founders of the AI field in 1956](https://en.wikipedia.org/wiki/Dartmouth_workshop).
>
> It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
>
>
>
### Machine learning
There have also been multiple (similar) definitions of machine learning (ML). For example, <NAME>, in section 1.1 of his book [Machine Learning](http://www.cs.ubbcluj.ro/%7Egabis/ml/ML-books/McGrawHill%20-%20Machine%20Learning%20-Tom%20Mitchell.pdf#page=14), defines machine learning as follows.
>
> A computer program is said to learn from experience $E$ with respect
> to some class of tasks $T$ and performance measure $P$, if its performance at tasks in $T$, as measured by $P$, improves with experience $E$.
>
>
>
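A concrete (assumed, toy) instantiation of this definition: let $T$ be classifying an integer as "small" or "large", $P$ accuracy on a fixed test set, and $E$ a growing set of labelled examples fed to a 1-nearest-neighbour learner:

```python
def label(x):                       # ground truth for task T
    return "small" if x < 50 else "large"

test_set = list(range(0, 100, 7))   # fixed set for measuring performance P

def accuracy(experience):           # P: 1-NN prediction vs. ground truth
    hits = sum(label(min(experience, key=lambda t: abs(t - x))) == label(x)
               for x in test_set)
    return hits / len(test_set)

little_e = [10, 60]                 # small experience E
more_e = little_e + [25, 45, 55, 80]
print(accuracy(little_e), accuracy(more_e))  # P improves with E
```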
### What is the difference between AI and ML?
ML is the data-oriented (or experience-driven) subfield of AI. AI is not just ML; it also comprises Natural Language Processing and other subfields.
Upvotes: 4 <issue_comment>username_6: Machine learning is a subfield of artificial intelligence, as the following diagram (taken from [this blog post](https://www.linkedin.com/pulse/how-artificial-intelligence-different-from-machine-learning-singh)) illustrates.
[](https://i.stack.imgur.com/5i2z7.png)
Upvotes: 4 <issue_comment>username_7: Artificial Intelligence (AI) and Machine Learning (ML) are two very hot buzzwords right now, and often seem to be used interchangeably. They are not quite the same thing, but the perception that they are can sometimes lead to some confusion. So I thought it would be worth writing a piece to explain the difference.
Machine learning is a core sub-area of artificial intelligence; it enables computers to get into a mode of self-learning without being explicitly programmed. When exposed to new data, these computer programs are enabled to learn, grow, change, and develop by themselves.
Upvotes: 1 <issue_comment>username_8: In simple words, **Artificial intelligence** is a field of science that tries to mimic the behavior of humans or other animals.
**Machine Learning** is one of the key tools/technologies behind Artificial intelligence.
[](https://i.stack.imgur.com/OkLbE.jpg)
Upvotes: 2 <issue_comment>username_9: First of all, I encountered the term Machine Learning much more in my Business Intelligence classes than in my AI classes.
My AI Professor <NAME> would have put it this way (after a long speech about what intelligence is, how it can be defined, different types of intelligence, etc.): ML is more static and "dumb", unaware of its physical environment and not made to interact with it, or only on an abstract basis. AI has a certain awareness of its environment and interacts with it autonomously, thereby making autonomous decisions with feedback loops.
From that point of view, username_5's answer is probably the closest.
Besides that, of course, ML is a subset of AI.
Machine Learning is not real intelligence (imho); it's mostly human intelligence reflected in logical algorithms, and as my Business Intelligence Prof would put it: about data and its analysis. Machine Learning has a lot of supervised algorithms which actually do need humans to support the learning process by telling them what's right and what's wrong, so they're not independent. And once they're applied, the algorithms are mostly static until humans readjust them.
In ML you mostly have black-box designs, and the main aspect is data. Data comes in, data gets analyzed ("intelligently"), data goes out, and learning mostly applies to a pre-implementation/learning phase. In most cases ML doesn't care about the environment a machine is in; it's about data.
AI instead is about mimicking human or animal intelligence. Following my Prof's approach, AI is not necessarily about self-consciousness but about interaction with the environment, so to build AI you need to give the machine sensors to perceive the environment, a sort of intelligence able to keep on learning, and elements to interact with the environment (arms, etc.). The interaction should happen in an autonomous way and ideally, as in humans, learning should be an autonomous, ongoing process.
So a drone that scans fields in a logical scheme for colour patterns to find weeds within crops would be more ML. Especially if the data is later analyzed and verified by humans, or the algorithm used is a static one with built-in "intelligence" that is not capable of rearranging itself or adapting to its environment.
A drone that flies autonomously, charges itself up when the battery's down, scans for weeds, learns to detect unknown ones and rips them out by itself and brings them back for verification, would be AI...
Upvotes: 0
|
2016/08/02
| 2,955
| 11,403
|
<issue_start>username_0: What aspects of quantum computers, if any, can help to further develop Artificial Intelligence?<issue_comment>username_1: Quantum computers can help further develop A.I. algorithms and solve problems limited only by our creativity and ability to define them. For example, breaking cryptography can take seconds, whereas it can take thousands of years for standard computers. The same goes for artificial intelligence: a quantum computer can explore all the combinations for a given problem defined by an algorithm. This is due to the superposition of multiple states of quantum bits.
Currently, quantum computers are still in the early stages of development and can perform only limited complex calculations. There are already technologies like [D-Wave](https://en.wikipedia.org/wiki/D-Wave_Systems) systems, which are used by Google and NASA for complex data analysis, using multi-qubit quantum computers for [solving Navier-Stokes fluid dynamics problems](https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equations) of interest, for global surveillance for military purposes, and for many more applications of which we're not aware.
Currently there are only a few quantum computers available to the public, like the [IBM Quantum Experience](http://www.research.ibm.com/quantum/) (the world's first quantum computing platform delivered via the IBM Cloud), but programming them happens at the level of quantum logic gates, so we're many years away from creating artificial intelligence available to the public. There are some [quantum computing languages](https://en.wikipedia.org/wiki/Quantum_programming) such as QCL, Q or Quipper, but I'm not aware of any libraries which provide artificial intelligence frameworks. That doesn't mean they don't exist, and I'm sure huge companies and government organisations are using them to outpace the competition (for financial market analysis, etc.).
Upvotes: 2 <issue_comment>username_2: Quantum computers are super awesome at matrix multiplication, [with some limitations](http://twistedoakstudios.com/blog/Post8887_what-quantum-computers-do-faster-with-caveats). Quantum superposition allows each bit to be in *a lot* more states than just zero or one, and quantum gates can fiddle those bits in many different ways. Because of that, a quantum computer can process a lot of information at once for certain applications.
One of those applications is the [Fourier transform](http://algorithmicassertions.com/quantum/2014/03/07/Building-your-own-Quantum-Fourier-Transform.html), which is useful in a lot of problems, like [signal analysis](https://dsp.stackexchange.com/q/69) and array processing. There's also [Grover's quantum search algorithm](http://twistedoakstudios.com/blog/Post2644_grovers-quantum-search-algorithm), which finds the single value for which a given function returns something different. If an AI problem can be expressed in a mathematical form [amenable to quantum computing](http://algorithmicassertions.com/quantum/2014/04/27/The-Not-Quantum-Laplace-Transform.html), it can receive great speedups. Sufficient speedups could transform an AI idea from "theoretically interesting but insanely slow" to "quite practical once we get a good handle on quantum computing."
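To give a feel for the size of that speedup: an unstructured search over $N$ items takes on the order of $N$ classical lookups, while Grover's algorithm needs roughly $(\pi/4)\sqrt{N}$ quantum iterations. A back-of-the-envelope comparison (idealized counts, ignoring constant factors and error correction):

```python
import math

# Idealized query counts for unstructured search over N = 2**n_bits items:
# classical average-case ~ N/2 lookups, Grover ~ (pi/4) * sqrt(N) iterations.
for n_bits in (16, 32, 48):
    N = 2 ** n_bits
    classical = N / 2
    grover = math.pi / 4 * math.sqrt(N)
    print(f"{n_bits} bits: classical ~{classical:.3g}, Grover ~{grover:.3g}")
```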
Upvotes: 6 [selected_answer]<issue_comment>username_3: Until we can make a quantum computer with a lot more qubits, the potential to further develop AI will remain just that.
D-Wave (which has just made a 2,000+ qubit system around 2015) is an *adiabatic quantum computer*, not a general-purpose quantum computer. It is restricted to certain optimization problems (at which its effectiveness [has reportedly been doubted](https://en.wikipedia.org/wiki/D-Wave_Systems#Reception) by one of the originators of the theory on which it is based).
Suppose that we could build a 32 qubit general-purpose quantum computer (twice as big as current models, as far as I'm aware). This would still mean that only 2^32 possibilities exist in superposition. This is a space small enough to be explored exhaustively for many problems. Hence, there are perhaps not so many problems for which any of the known quantum algorithms (e.g. [Shor](https://en.wikipedia.org/wiki/Shor%27s_algorithm), [Grover](https://en.wikipedia.org/wiki/Grover%27s_algorithm)) would be useful for that number of bits.
Upvotes: 3 <issue_comment>username_4: **Direct Answer to Your Question**:--
The field where quantum computing and A.I. intersect is called **quantum machine learning**.
1. A.I. is a developing field, with some background (ala McCarthy of LISP fame).
2. Quantum computing is a virgin field that is largely unexplored.
A particular type of complexity interacts with another type of complexity to create a very rich field.
Now combine (1) and (2), and you end up with even more uncertainty; the technical details shall be explored in this answer.
Google Explains Quantum Computing in One Simple Video: [Google and NASA's Quantum Artificial Intelligence Lab](https://www.youtube.com/watch?v=CMdHDHEuOUE)
---
**Body**:--
IBM is an authority:--
[IBM: Quantum Computers Could Be Useful, But We Don't Know Exactly How](https://futurism.com/the-byte/ibm-problems-quantum-computing)
Quantum machine learning is an interesting phenomenon. This field studies the intersection between quantum computing and machine learning.
(<https://en.wikipedia.org/wiki/Quantum_machine_learning>)
>
> "While machine learning algorithms are used to compute immense quantities of data, quantum machine learning increases such capabilities intelligently, by creating opportunities to conduct analysis on quantum states and systems." Wikipedia contributors. — "Quantum machine learning." *Wikipedia, The Free Encyclopedia*. Wikipedia, The Free Encyclopedia, 7 Oct. 2019. Web. 11 Oct. 2019.
>
>
>
---
**Technical Mirror**:--
This particular section on the implementations is worth noting:--
(<https://en.wikipedia.org/wiki/Quantum_machine_learning#Implementations_and_experiments>)
>
> " ... This dependence on data is a powerful training tool. But it comes with potential pitfalls. If machines are trained to find and exploit patterns in data then, in certain instances, they only perpetuate the race, gender or class prejudices specific to current human intelligence.
>
>
> But the data-processing facility inherent to machine learning also has the potential to generate applications that can improve human lives. 'Intelligent' machines could help scientists to more efficiently detect cancer or better understand mental health.
>
>
> Most of the progress in machine learning so far has been classical: the techniques that machines use to learn follow the laws of classical physics. The data they learn from has a classical form. The machines on which the algorithms run are also classical.
>
>
> We work in the emerging field of quantum machine learning, which is exploring whether the branch of physics called quantum mechanics might improve machine learning. Quantum mechanics is different to classical physics on a fundamental level: it deals in probabilities and makes a principle out of uncertainty. Quantum mechanics also expands physics to include interesting phenomena which cannot be explained using classical intuition. ... " — "Explainer: What Is Quantum Machine Learning And How Can It Help Us?". *Techxplore.Com*, 2019, <https://techxplore.com/news/2019-04-quantum-machine.html>.
>
>
>
* [A Future with Quantum Machine Learning](https://ieeexplore.ieee.org/abstract/document/8301126)
* [Quantum Computing, Deep Learning, and Artificial Intelligence](https://www.datasciencecentral.com/profiles/blogs/quantum-computing-deep-learning-and-artificial-intelligence)
---
**Business Applications and Practical Uses**:--
* [Is Your IT Department Prepared For The Next Wave Of Enterprise Tech?](https://www.forbes.com/sites/forbestechcouncil/2019/10/07/is-your-it-department-prepared-for-the-next-wave-of-enterprise-tech-2/#2d1ce7bb5787) (Quantum Computing is mentioned here.)
* [D-Wave Announces Quadrant Machine Learning Business Unit](https://www.dwavesys.com/press-releases/d-wave-announces-quadrant-machine-learning-business-unit)
---
**Further Reading**:--
* (<https://techxplore.com/news/2019-04-quantum-machine.html>)
* (<https://physics.aps.org/articles/v12/74?fbclid=IwAR2hVTFReQA-3lTNQXKEAtQN7KQ5Lz41wyM19DJDtS1H4fLDNivqxqh5G2k>)
* (<https://www.forbes.com/sites/bernardmarr/2017/09/05/how-quantum-computers-will-revolutionize-artificial-intelligence-machine-learning-and-big-data/#59b153bf5609>)
* (<http://www.messagetoeagle.com/what-is-quantum-machine-learning-and-how-can-it-help-us/>)
Upvotes: 1 <issue_comment>username_5: I would say that the answer to your question is ***yes***. I could write a short book on the subject (that might be a good idea actually), but I will keep this response brief, although I am happy to answer in the comments any further questions to which I possess the answers!
The primary reasons that I believe the newly blossoming field of Quantum Information will have a massive impact on the field of Machine Learning in general, are as follows:
The most simple reason for my belief is that the primary goal of Machine Learning is to create an entity that is capable of coherent, self-aware thought much like we exhibit as human beings. We know that the brain is what allows us to be capable of such feats, and thus I view the field as something like brain counterfeiting. Without going into esoteric detail, there are many subtleties of the brain's workings that are thought to be quantum mechanical in operation, and thus would suggest that the path of least resistance to replicating the system would require a quantum mechanical computational medium.
The second primary rationale which solidifies my position is the efficiency gained when mapping linear operations into a qubit-based formalism. This is primarily due to the quantum phenomenon referred to as ***superposition***, which allows a multiple-qubit gate to work not only with the options 00, 01, 10, and 11 (assuming a two-qubit gate), but also with any combination in between during the computation.
>
> This illustrates the concept of a ***[Bloch Sphere](https://en.wikipedia.org/wiki/Bloch_sphere#References)*** representation in a more concrete manner.
> [](https://i.stack.imgur.com/YQlNg.jpg)
>
>
>
That said, when the result is obtained (this is what is referred to as collapsing the wave function) you will still only have a resulting state space with 2^n possibilities, where n is the number of qubits. There are, however, very clever ways by which one can design algorithms to make full use of this technically infinite computational space before observing the final results.
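A toy classical simulation of that 2^n state space (a sketch only: it tracks the 2^n amplitudes in a plain Python list, which is exactly the bookkeeping real quantum hardware lets you avoid): a Hadamard gate on each of two qubits puts the register into an equal superposition of 00, 01, 10 and 11, and "collapsing" samples one basis state with probability equal to the squared amplitude.

```python
import math
import random

random.seed(2)

n = 2
amps = [0.0] * (2 ** n)
amps[0] = 1.0                        # start in the basis state |00>

def hadamard(amps, qubit):
    """Apply a Hadamard gate to one qubit of the state vector."""
    out = amps[:]
    for i in range(len(amps)):
        if not (i >> qubit) & 1:     # pair basis states differing in `qubit`
            j = i | (1 << qubit)
            out[i] = (amps[i] + amps[j]) / math.sqrt(2)
            out[j] = (amps[i] - amps[j]) / math.sqrt(2)
    return out

for q in range(n):
    amps = hadamard(amps, q)
print(amps)                          # four equal amplitudes of ~0.5

outcome = random.choices(range(2 ** n), [a * a for a in amps])[0]
print(format(outcome, "02b"))        # one of 00, 01, 10, 11
```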
**Conclusion**:
I hope that my answer is helpful to you in some way. Although I am aware that it is not a very in-depth answer, I feel it conveys the primary reasons for my personal belief that a wall will be hit in the pursuit of a general AI while we are limited to classical computation faculties, and thus quantum-based computation will be required before we are able to truly mimic the brain's most well-kept secrets! The next couple of decades should be VERY interesting in these fields; keep a close eye on the latest happenings!
Upvotes: 1
|
2016/08/02
| 970
| 3,943
|
<issue_start>username_0: I believe a Markov chain is a sequence of events where each subsequent event depends probabilistically on the current event. What are examples of the application of a Markov chain and can it be used to create artificial intelligence? Would a genetic algorithm be an example of a Markov chain since each generation depends upon the state of the prior generation?<issue_comment>username_1: A Markov model includes the probability of transitioning to each state considering the current state. "Each state" may be just one point - whether it rained on specific day, for instance - or it might look like multiple things - like a pair of words. You've probably seen automatically generated weird text that *almost* makes sense, like [Garkov](https://blog.codinghorror.com/markov-and-you/) (the output of a Markov model based on the Garfield comic strips). That Coding Horror article also mentions the applications of Markov techniques to Google's PageRank.
Markov models are really only powerful when they have a lot of input to work with. If a machine looked through a lot of English text, it would get a pretty good idea of what words generally come after other words. Or after looking through someone's location history, it could figure out where that person is likely to go next from a certain place. Constantly updating the "input corpus" as more data is received would let the machine tune the probabilities of all the state transitions.
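A minimal sketch of such a next-word model (the tiny corpus here is a stand-in for the large body of text a real model would need):

```python
import random

random.seed(4)

# Count which word follows which, then sample a chain of next words.
corpus = "the cat sat on the mat and the cat ran".split()

transitions = {}
for current, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(current, []).append(nxt)

word = "the"
output = [word]
for _ in range(5):
    # Fall back to a random corpus word if the chain hits a dead end.
    word = random.choice(transitions.get(word, corpus))
    output.append(word)
print(" ".join(output))
```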
Genetic algorithms are fairly different things. They create functions by shuffling around parts of functions and seeing how good each function is at a certain task. A child algorithm will depend on its parents, but Markov models are interested mostly in predicting what thing will come next in a sequence, not creating a new chunk of code. You might be able to use a Markov model to spit out a candidate function, though, depending on how simple the "alphabet" is. You could even then give more weight to the transitions in successful algorithms.
Upvotes: 4 [selected_answer]<issue_comment>username_2: >
> (this was intended as a comment, but turned out long and longer)
>
>
>
A couple of points to elaborate on [Ben's answer](https://ai.stackexchange.com/a/73/70):
* It is possible to generate different models (out of existing data!) and then look for the model that best fits new data (e.g. with [knn](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm)). Example:
+ States = {*sleep*, *eat*, *walk*, *work*}
+ Model 1: Most probable sequence on weekdays, say: sleep → sleep → eat → walk → work → work → eat → walk → sleep → sleep
+ Model 2: Most probable sequence on weekends, say: sleep → sleep → eat → walk → eat → walk → sleep → sleep
+ New data arrives: Which model is it more likely to have come from? Check model 1, check model 2. Which fits better? → Assign
* Note that the previous example is oversimplified. Also note that a *unit time* is needed there (other than letters / words, for instance).
* You can *nest* Markov models. That means that you generate a model (a set of probabilities for all the states) in a "lower scale" and then use it in a more abstract model. For example, you can nest your day-scale model to a month or year (to include holidays, for instance).
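A toy sketch of this model-selection step — the transition probabilities below are invented for illustration, not estimated from real data:

```python
import math

# Hypothetical transition probabilities (each row sums to 1); in practice
# these would be estimated from logged weekday and weekend data.
weekday = {
    "sleep": {"sleep": 0.5, "eat": 0.5},
    "eat":   {"walk": 0.5, "sleep": 0.5},
    "walk":  {"work": 0.5, "sleep": 0.5},
    "work":  {"work": 0.5, "eat": 0.5},
}
weekend = {
    "sleep": {"sleep": 0.5, "eat": 0.5},
    "eat":   {"walk": 1.0},
    "walk":  {"eat": 0.5, "sleep": 0.5},
    "work":  {"work": 1.0},
}

def log_likelihood(model, sequence):
    """Sum of log transition probabilities; -inf if any step is impossible."""
    total = 0.0
    for a, b in zip(sequence, sequence[1:]):
        p = model.get(a, {}).get(b, 0.0)
        if p == 0.0:
            return float("-inf")
        total += math.log(p)
    return total

new_day = ["sleep", "eat", "walk", "eat", "walk", "sleep"]
scores = {"weekday": log_likelihood(weekday, new_day),
          "weekend": log_likelihood(weekend, new_day)}
best = max(scores, key=scores.get)
print(best)  # 'weekend' — the weekday model gives this sequence zero probability
```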
Also [see this link for a nice introduction](http://blog.wolfram.com/2013/02/04/centennial-of-markov-chains/) and [some posts in crossvalidated](https://stats.stackexchange.com/questions/tagged/mcmc?sort=votes&pageSize=50).
---
As for the question if artificial intelligence can be created by using this kind of methods, my personal (easy) answer would be **no**, because they only relate data and probabilities and thus belong more to the statistics and machine learning branch.
A longer answer needs to take into account the [weak vs. strong AI question](https://ai.stackexchange.com/questions/74/what-is-the-difference-between-strong-ai-and-weak-ai).
Upvotes: 2
|
2016/08/02
| 810
| 3,495
|
<issue_start>username_0: What purpose does the "dropout" method serve and how does it improve the overall performance of the neural network?<issue_comment>username_1: Dropout means that every individual data point is only used to fit a random subset of the neurons. This is done to make the neural network more like an ensemble model.
That is, just as a random forest is averaging together the results of many individual decision trees, you can see a neural network trained using dropout as averaging together the results of many individual neural networks (with 'results' understood to mean activations at every layer, rather than just the output layer).
Upvotes: 4 [selected_answer]<issue_comment>username_2: I'll try to answer your questions using <NAME>'s ideas in the dropout paper and his Coursera class.
**What purpose does the "dropout" method serve?**
>
> Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem.
>
>
>
So, dropout is a regularization technique that addresses the problem of overfitting.
**How does it improve the overall performance?** If by "performance" you mean "generalization", then it may improve generalization by avoiding over-fitting, as I said.
Upvotes: 2 <issue_comment>username_3: The original paper1 that proposed neural network dropout is titled [Dropout: A simple way to prevent neural networks from overfitting](http://www.jmlr.org/papers/volume15/srivastava14a.old/source/srivastava14a.pdf). That title pretty much explains in one sentence what dropout does. Dropout works by randomly selecting and removing neurons in a neural network during the training phase. Note that dropout is not applied during testing, and the trained network does not drop out neurons as part of predicting.
This random removal/dropout of neurons prevents excessive co-adaptation of the neurons and, in so doing, reduces the likelihood of the network [overfitting](https://en.wikipedia.org/wiki/Overfitting).
The random removal of neurons during training also means that at any point in time, only a portion of the original network is trained. This has the effect that you end up sort of training multiple sub-networks, for example:

It is from this repeated training of sub-networks, as opposed to the entire network, that the notion of neural network dropout being a sort of ensemble technique comes in. That is, training the sub-networks is similar to training numerous, relatively weak algorithms/models and combining them to form one algorithm that is more powerful than the individual parts.
***References:***
1: Srivastava, Nitish, et al. "Dropout: A simple way to prevent neural networks from overfitting." The Journal of Machine Learning Research 15.1 (2014): 1929-1958.
Upvotes: 3 <issue_comment>username_4: There are some great answers here. The simplest explanation I can give for dropout is that it randomly excludes some neurons and their connections from the network, while training, to stop neurons from "co-adapting" too much. It has the effect of making each neuron apply more generally and is excellent for stopping overfitting for large neural networks.
Upvotes: 2
|
2016/08/02
| 1,260
| 5,119
|
<issue_start>username_0: Can an AI program have an IQ? In other words, can the IQ of an AI program be measured? Like how humans can do an IQ test.<issue_comment>username_1: Short answer: No.
Longer answer: It depends on what IQ exactly is, and when the question is asked compared to ongoing development. The topic you're referring to is actually more commonly described as AGI, or Artificial General Intelligence, as opposed to AI, which could be any narrow problem solving capability represented in software/hardware.
[Intelligence quotient](https://en.wikipedia.org/wiki/Intelligence_quotient) is a rough estimate of how well humans are able to generally answer questions they have not previously encountered, but as a predictor it is somewhat flawed, and has many criticisms and detractors.
Currently (2016), no known programs have the ability to generalize, or apply learning from one domain to solving problems in an arbitrarily different domain through an abstract understanding. (However there are programs which can effectively analyze, or break down some information domains into simpler representations.) This seems likely to change as time goes on and both hardware and software techniques are developed toward this goal. Experts widely disagree as to the likely timing and approach of these developments, as well as to the most probable outcomes.
It's also worth noting that there seems to be a large deficit of understanding as to what exactly consciousness is, and disagreement over whether there is ever likely to be anything in the field of artificial intelligence that compares to it.
Upvotes: 5 [selected_answer]<issue_comment>username_2: It all depends on what your AI can do. Even humans cannot do everything.
If your AI program is smart enough, ask it to take a general IQ test designed for humans. Real IQ tests are made up of questions from several different areas, so in that way you can measure the IQ of your AI.
This is because **IQ** refers to tests that are **designed** to assess human intelligence.
>
> An intelligence quotient (IQ) is a total score derived from one of several standardized tests designed to assess human intelligence.[wiki](https://en.wikipedia.org/wiki/Intelligence_quotient)
>
>
>
So there is no other way of measuring IQ than taking an IQ test; otherwise, it wouldn't be IQ (by definition).
If your program is not that smart, you should look for specific tests related to the expertise or problem being solved. Ideally, let it compete with humans who have the same expertise in that area, but it's important to run the test on equal footing.
For example, the intelligence of the [Deep Blue](https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)) project was measured by playing chess against Kasparov. If the world champion cannot win the game, who will?
If you're writing a program to play a game, make it compete with humans and measure its intelligence in terms of score.
---
The equivalent of IQ for AI is a Turing Test (like [MIST](https://ai.stackexchange.com/q/1397/8) and other), see:
* [Is the Turing Test, or any of its variants, a reliable test of artificial intelligence?](https://ai.stackexchange.com/q/15/8)
Upvotes: 2 <issue_comment>username_3: The other answers are correct that machine IQ test results are currently **not** indicative of machine intelligence. One of the surprising facts of human intelligence is that performance across almost all cognitive tasks is correlated; that is, there is such a thing as 'general smartness', and IQ tests attempt to measure that thing.
People *have* built programs that take IQ tests, however, and some of them perform quite well. Raven's Progressive Matrices, a visual pattern recognition IQ test, is an easy target for AI (see [this paper](https://www.researchgate.net/publication/288211280_Solving_Raven's_IQ-tests_An_AI_and_cognitive_modeling_approach) as representative) and another group [has constructed an AI](http://arxiv.org/abs/1509.03390) that performs about as well as a 4 year old on the verbal intelligence portion of a standard childhood IQ test.
Upvotes: 2 <issue_comment>username_4: All these questions are fully covered in the book *[The Measure of All Minds: Evaluating Natural and Artificial Intelligence](https://www.cambridge.org/us/academic/subjects/computer-science/artificial-intelligence-and-natural-language-processing/measure-all-minds-evaluating-natural-and-artificial-intelligence?format=HB&isbn=9781107153011)* (Hernández-Orallo, 2017).
An excerpt from the description gives a good overview:
>
> By replacing the dominant anthropocentric stance with a universal perspective where living organisms are considered as a special case, long-standing questions in the evaluation of behavior can be addressed in a wider landscape. Can we derive task difficulty intrinsically? Is a universal g factor - a common general component for all abilities - theoretically possible? Using algorithmic information theory as a foundation, the book elaborates on the evaluation of perceptual, developmental, social, verbal and collective features,
>
>
>
Upvotes: 0
|
2016/08/02
| 1,912
| 7,388
|
<issue_start>username_0: Why would anybody want to use "hidden layers"? How do they enhance the learning ability of the network in comparison to the network which doesn't have them (linear models)?<issue_comment>username_1: Hidden layers by themselves aren't useful. If you had hidden layers that were linear, the end result would still be a linear function of the inputs, and so you could collapse an arbitrary number of linear layers down to a single layer.
This is why we use nonlinear [activation functions](https://en.wikipedia.org/wiki/Activation_function), like RELU. This allows us to add a level of nonlinear complexity with each hidden layer, and with arbitrarily many hidden layers we can construct arbitrarily complicated nonlinear functions.
Because we can (at least in theory) capture any degree of complexity, we think of neural networks as "universal learners," in that a large enough network could mimic any function.
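A quick NumPy check of the collapse argument (the shapes and random weights are arbitrary): stacking two linear layers is exactly one linear layer, and only a nonlinearity in between changes that.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 3))    # a batch of 4 inputs
W1 = rng.normal(size=(3, 5))   # weights of a "hidden" linear layer
W2 = rng.normal(size=(5, 2))   # weights of the output layer

# Two stacked linear layers are exactly one linear layer with weights W1 @ W2.
two_layers = (x @ W1) @ W2
one_layer = x @ (W1 @ W2)
print(np.allclose(two_layers, one_layer))  # True

# Inserting a nonlinearity (ReLU) between the layers breaks the equivalence.
relu = lambda z: np.maximum(z, 0)
nonlinear = relu(x @ W1) @ W2
print(np.allclose(nonlinear, one_layer))
```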
Upvotes: 2 <issue_comment>username_2: "Hidden" layers really aren't all that special... a hidden layer is really no more than any layer that isn't input or output. So even a very simple 3 layer NN has 1 hidden layer. So I think the question isn't really "How do hidden layers help?" as much as "Why are deeper networks better?".
And the answer to that latter question is an area of active research. Even top experts like <NAME> and <NAME> will freely admit that we don't really understand why deep neural networks work. That is, we don't understand them in complete detail anyway.
That said, the theory, as I understand it goes something like this... successive layers of the network learn successively more sophisticated features, which build on the features from preceding layers. So, for example, an NN used for facial recognition might work like this: the first layer detects edges and nothing else. The next layer up recognizes geometric shapes (boxes, circles, etc.). The next layer up recognizes primitive features of a face, like eyes, noses, jaw, etc. The next layer up then recognizes composites based on combinations of "eye" features, "nose" features, and so on.
So, in theory, deeper networks (more hidden layers) are better in that they develop a more granular/detailed representation of a "thing" being recognized.
Upvotes: 4 [selected_answer]<issue_comment>username_3: Actually, the hierarchical learning explanation given by [username_2](https://ai.stackexchange.com/a/51/2444) is no longer considered fully satisfactory ([this](https://youtu.be/Z6rxFNMGdn0?t=260) was also indicated by Ian Goodfellow): there are now neural networks with 150 layers or more, and the hierarchical explanation does not make sense for such networks. Instead, we can think of deep networks as untangling the knots of high-dimensional manifolds, i.e. we transform the input into a higher-dimensional space, and this helps us find a better representation of the data.
A geometric interpretation was explained as such in the book *Deep Learning with Python* by <NAME>:
>
> ...you can interpret a neural network as a very complex geometric transformation in a high-dimensional space, implemented via a long series of simple steps...
>
>
>
>
> Imagine two sheets of colored paper: one red and one blue. Put one on top of the other. Now crumple them together into a small ball. That crumpled paper ball is your input data, and each sheet of paper is a class of data in a classification problem. What a neural network (or any other machine-learning model) is meant to do is figure out a transformation of the paper ball that would uncrumple it, so as to make the two classes cleanly separable again. With deep learning, this would be implemented as a series of simple transformations of the 3D space, such as those you could apply on the paper ball with your fingers, one movement at a time. Uncrumpling paper balls is what machine learning is about: finding neat representations for complex, highly folded data manifolds. At this point, you should have a pretty good intuition as to why deep learning excels at this: it takes the approach of incrementally decomposing a complicated geometric transformation into a long chain of elementary ones, which is pretty much the strategy a human would follow to uncrumple a paper ball. Each layer in a deep network applies a transformation that disentangles the data a little—and a deep stack of layers makes tractable an extremely complicated disentanglement process.
>
>
>
I suggest you to read [this brilliant blog post](https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/) to learn about the topological interpretation of deep learning.
Also, [this](https://cs.stanford.edu/people/karpathy/convnetjs/demo/classify2d.html) toy interactive code may help you.
In the context of machine learning, the concept of a *manifold* can be illustrated as in the following figure.
[](https://i.stack.imgur.com/bpS1y.png)
In the first part, data are 3-dimensional. However, we can find a transformation to get the second image, which shows that data is actually artificially high dimensional, i.e. it is a 2-dimensional manifold in 3-D space. This example may be thought of as a classification problem, and colors may represent classes, and we can find a trivial representation of the data for classification.
Another example is given by the following figures from the blog mentioned above. Here, the classification problem cannot be solved without a layer that has 3 or more hidden units, regardless of depth. So the notion of high-dimensional transformation is important.
[](https://i.stack.imgur.com/NfkSD.jpg)
We can map this data to 3-D, and find a plane to separate them.
[](https://i.stack.imgur.com/tyRcb.jpg)
Upvotes: 2 <issue_comment>username_4: One aspect that I'd like to add to the previous answers is the so-called *Curse of dimensionality*. This concept refers to the problem that many algorithms have a time complexity that grows exponentially with the dimension of the data.
As a simple example, let us consider a set $\{0,1\}^{D}$ that has only two values per dimension. For example, $\{0,1\}^{2} = \{(0,0),(0,1),(1,0),(1,1)\}$ and $(0,1,0) \in \{0,1\}^{3}$. Now imagine that you are given a function $f: \{0,1\}^{D} \rightarrow \{TRUE, FALSE\}$ that outputs TRUE exactly for one particular input. The goal is to determine that input.
In the example, if nothing else is known about f, the best thing one can do is to try the inputs one after another. However, $\{0,1\}^{D}$ has $2^D$ elements. So the number of inputs one has to try out will in general be roughly $2^D$ as well.
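A short sketch of that brute-force search (the target input and the choice of $D$ are made up for illustration):

```python
from itertools import product

def find_true_input(f, D):
    """Brute-force search: with no structure to exploit, we may have to
    try all 2**D candidate inputs before finding the one mapped to TRUE."""
    for candidate in product((0, 1), repeat=D):
        if f(candidate):
            return candidate

D = 10
target = (0, 1) * (D // 2)     # the one input f answers TRUE for (made up)
f = lambda x: x == target
print(find_true_input(f, D))   # recovers `target` after up to 2**10 = 1024 tries
```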
However, there exist examples suffering from a curse of dimensionality that can be solved with deep learning, i.e. using neural networks with **many** hidden layers.
One example of great practical importance is given by high-dimensional partial differential equations; see, e.g., this report:
<http://www.sam.math.ethz.ch/sam_reports/reports_final/reports2017/2017-44_fp.pdf>
or this example for heat equations:
<https://arxiv.org/abs/1901.10854>
I also found this review on using deep learning to overcome the curse of dimensionality:
<https://cbmm.mit.edu/sites/default/files/publications/02_761-774_00966_Bpast.No_.66-6_28.12.18_K1.pdf>
Upvotes: 0
|
2016/08/02
| 442
| 1,913
|
<issue_start>username_0: How would you estimate the generalization error? What are the methods of achieving this?<issue_comment>username_1: Generalization error is the error obtained by applying a model to data it has not seen before. So, if you want to measure generalization error, you need to remove a subset from your data and don't train your model on it. After training, you verify your model accuracy (or other performance measures) on the subset you have removed since your model hasn't seen it before. Hence, this subset is called a [test set](https://en.wikipedia.org/wiki/Test_set).
Additionally, another subset can also be used for parameter selection, which we call a [validation set](https://en.wikipedia.org/wiki/Test_set#Validation_set). We can't use the training set for parameter tuning, since it does not measure generalization error, but we can't use the test set too since our parameter tuning would overfit test data. That's why we need a third subset.
Finally, in order to obtain more predictive performance measures, we can use many different train/test partitions and average the results. This is called [cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)).
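A minimal sketch of carving out such subsets (the split fractions are arbitrary; libraries such as scikit-learn provide equivalent helpers):

```python
import numpy as np

rng = np.random.default_rng(42)

def train_val_test_split(X, y, val_frac=0.15, test_frac=0.15):
    """Shuffle once, then carve off disjoint validation and test subsets."""
    idx = rng.permutation(len(X))
    n_test = round(len(X) * test_frac)
    n_val = round(len(X) * val_frac)
    test_i = idx[:n_test]
    val_i = idx[n_test:n_test + n_val]
    train_i = idx[n_test + n_val:]
    return (X[train_i], y[train_i]), (X[val_i], y[val_i]), (X[test_i], y[test_i])

X = np.arange(100).reshape(100, 1)
y = np.arange(100)
train, val, test = train_val_test_split(X, y)
print(len(train[0]), len(val[0]), len(test[0]))  # 70 15 15
```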
Upvotes: 4 [selected_answer]<issue_comment>username_2: Error Estimation is a subject with a long history. The test-set method is only one way to estimate generalization error. Others include resubstitution, cross-validation, bootstrap, posterior-probability estimators, and bolstered estimators. These and more are reviewed, for instance, in the book: Braga-Neto and Dougherty, "Error Estimation for Pattern Recognition," IEEE-Wiley, 2015.
Upvotes: 1 <issue_comment>username_3: It's basically not possible to test besides some empirical experiments. All the generalization bounds only apply if your process actually follows the model assumptions which you don't actually know to be true.
Upvotes: -1
|
2016/08/02
| 560
| 1,947
|
<issue_start>username_0: I've implemented [the reinforcement learning algorithm](https://en.wikipedia.org/wiki/Reinforcement_learning) for an agent to play [snappy bird](https://github.com/admonkey/snappybird) (a shameless cheap ripoff of flappy bird) utilizing a q-table for storing the history for future lookups. It works and eventually achieves perfect convergence after enough training.
Is it possible to implement a neural network to do function approximation in order to accomplish the purpose of the q-table? Obviously, storage is a concern with the q-table, but it doesn't seem to ever train with the neural net alone. Perhaps training the NN on an existing q-table would work, but I would like to not use a q-table at all if possible.<issue_comment>username_1: [<NAME>'s blog](http://karpathy.github.io/2016/05/31/rl/) has a tutorial on getting a neural network to learn pong with reinforcement learning. His commentary on the current state of the field is interesting.
He also provides a whole bunch of links (David Silver's [course](http://www0.cs.ucl.ac.uk/staff/d.silver/web/Teaching.html) catches my eye). [Here is a working link to the lecture videos.](https://www.youtube.com/watch?v=2pWv7GOvuf0)
Here are demos of DeepMinds game playing.
Get links to the papers at <NAME>y's blog above
- [rat fps](https://www.youtube.com/watch?v=MAMuNUixKJ8)
* [nice demos at 19 minutes into this](http://livestream.com/oxuni/StracheyLectureDrDemisHassabis/videos/113380152)
Upvotes: 3 [selected_answer]<issue_comment>username_2: Yes, it is possible. The field of *deep reinforcement learning* is all about using deep neural networks (that is, neural networks with at least one hidden layer) to *approximate* value functions (such as the $Q$ function) or policies.
Have a look at the paper [A Brief Survey of Deep Reinforcement Learning](https://arxiv.org/abs/1708.05866) that gives a brief survey of the field.
Upvotes: 0
|
2016/08/02
| 1,414
| 5,667
|
<issue_start>username_0: I read that in the spring of 2016 a computer [Go program](https://en.wikipedia.org/wiki/Computer_Go) was finally able to beat a professional human for the first time.
Now that this milestone has been reached, does that represent a significant advance in artificial intelligence techniques or was it just a matter of even more processing power being applied to the problem?
What are some of the methods used to program the successful Go-playing program?
Are those methods considered to be artificial intelligence?<issue_comment>username_1: It doesn't make much sense to have a single threshold with "unintelligent" below it and "intelligent" above it.
I think it makes more sense to have a gradation of intelligence by cognitive task. Inverting a matrix is a 'cognitive task,' and one where working memory pays off immensely; computers have been much better at that cognitive task than humans for a long time.
What the AlphaGo victory represents has several components. One is that we have algorithms that are competitive with the best board-game playing humans at doing tactical and strategic thinking in the well-described world of Go. Another is that the deeper structure of the human visual system seems to have been duplicated, and so we have algorithms that can recognize patterns as well as humans--with *very* limited resolution. (AlphaGo is seeing one pixel per stone, whereas we have very, very high-resolution eyes and the visual cortex to match.)
Different people have different intuitions, but it seems to me that visual intelligence is a huge component of human intelligence in general. If we know most of the secrets of human visual intelligence, that means there might be many tasks that computers could now perform as well as humans (if provided the correct training data).
Upvotes: 3 <issue_comment>username_2: There are at least two questions in your question:
>
> What are some of the methods used to program the successful go playing program?
>
>
>
and
>
> Are those methods considered to be artificial intelligence?
>
>
>
The first question is deep and technical, the second broad and philosophical.
The methods have been described in: [Mastering the Game of Go with Deep Neural Networks and Tree Search](https://www.researchgate.net/publication/292074166_Mastering_the_game_of_Go_with_deep_neural_networks_and_tree_search).
The problem of Go or perfect information games in general is that:
>
> exhaustive search is infeasible.
>
>
>
So the methods will concentrate on shrinking the search space in an efficient way.
Methods and structures described in the paper include:
* learning from expert human players in a supervised fashion
* learning by playing against itself (reinforcement learning)
* Monte-Carlo tree search (MCTS) combined with policy and value networks
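The tree-search component typically selects which branch to explore with a rule such as UCT (Upper Confidence bounds applied to Trees); a minimal sketch of that selection score (the child statistics below are made up):

```python
import math

def uct_score(value_sum, visits, parent_visits, c=1.4):
    """UCT: average value of a child (exploitation) plus a bonus that
    shrinks as the child is visited more often (exploration)."""
    if visits == 0:
        return float("inf")   # always try unvisited moves first
    return value_sum / visits + c * math.sqrt(math.log(parent_visits) / visits)

# (value_sum, visits) for three hypothetical children of the current node.
children = [(3.0, 5), (1.0, 1), (0.0, 0)]
parent_visits = 6
best = max(range(len(children)),
           key=lambda i: uct_score(*children[i], parent_visits))
print(best)  # 2 — the unvisited child is expanded first
```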
The second question has no definite answer, as you will have at least two angles on AI: [strong](https://en.wikipedia.org/wiki/Chinese_room#Strong_AI) and [weak](https://en.wikipedia.org/wiki/Weak_AI).
>
> All real-world systems labeled "artificial intelligence" of any sort are **weak AI at most**.
>
>
>
So, yes, it is artificial intelligence, but it is non-sentient.
Upvotes: 4 [selected_answer]<issue_comment>username_3: >
> Now that this milestone has been reached, does that represent a significant advance in artificial intelligence techniques or was it just a matter of ever more processing power being applied to the problem?
>
>
>
Neither, really. It is a milestone and a significant advance in computers beating humans in games, but the techniques used are only relevant to that game, not for other purposes in AI.
The solution lies in humans analyzing the game and implementing algorithms for finding a good move. This is the main reason that a computer can beat the humans, together with the fact that it can calculate much faster and that it doesn't make really bad moves by not seeing something.
Processing power helps, but the game-tree complexity of Go is very large, estimated to be larger than 10^200, whereas the game-tree complexity of chess is only about 10^120 (known as the Shannon number), so chess is less hard. This means that for neither chess nor Go can a database be created with all possible positions.
The fact that Deep Blue beat Kasparov in a six-game match in 1997 was quite a development, since this was one of the first "hard" games where a computer beat a top human. But it still isn't really artificial intelligence, more analysis of the game. Implementing an opening and endgame book was a large part; the middle game was done using analysis, though I don't know the details.
Upvotes: 2 <issue_comment>username_4: We've had many discussions on what constitutes Artificial Intelligence, and my takeaway has been that decision-making is the core requirement of AI, regardless of the optimality of that decision.
In this conception, Nimatron *(1939, [US2215544A](https://patents.google.com/patent/US2215544))* might be thought of as the first proper AI, pending verification of [a fabled Babbage Tic-Tac-Toe machine](https://deserthat.wordpress.com/2010/05/18/earlyer-computer-games-babbage-and-nimatron/).
*But*, the question as [to whether a simple switch represents the most basic form of intelligence](https://ai.stackexchange.com/questions/3847/is-transistor-the-first-artificial-intelligence) has also been raised...
I think a distinction between these decision-making devices and earlier algorithmic implementations such as water clocks, is that the water clocks cannot be said to make a decision in the sense of maximizing chance of success at some goal.
Upvotes: 2
|
2016/08/02
| 807
| 3,673
|
<issue_start>username_0: I have a background in Computer Engineering and have been working on developing better algorithms to mimic human thought. (One of my favorites is Analogical Modeling as applied to language processing and decision making.) However, the more I research, the more I realize just *how* complicated AI is.
I have tried to tackle many problems in this field, but sometimes I find that I am reinventing the wheel or am trying to solve a problem that has already been proven to be unsolvable (ie. the halting problem). So, to help in furthering AI, I want to better understand the current obstacles that are hindering our progress in this field.
For example, time and space complexity of some machine learning algorithms is super-polynomial which means that even with fast computers, it can take a while for the program to complete. Even still, some algorithms may be fast on a desktop or other computer while dealing with a small data set, but when increasing the size of the data, the algorithm becomes intractable.
What are other issues currently facing AI development?<issue_comment>username_1: One obstacle to the development of AI is the fundamental limitations of computer memory. Computers, at a fundamental level, can only work with bits. This limits the type of information that they can describe.
EDIT:
The precise nature and complexity of human memory isn't fully understood, but I would argue that at the very least, human memory is well adapted for the types of tasks that humans perform. Thus, computer memory, even if theoretically capable of representing everything that human memory can, is probably inefficient and poorly structured for such a task.
Upvotes: -1 <issue_comment>username_2: 1. we don't really know what intelligence is.
2. we don't truly understand how the best model of intelligence we have available (human intelligence) works.
3. we're trying to replicate human intelligence (to some extent) on hardware which is quite different from the hardware it runs on in reality.
4. the human brain (our best model of intelligence) is mostly a black-box to us, and it's difficult to probe/introspect its operation without killing the test subject. This is, of course, unethical and illegal. So progress in understanding the brain is very slow.
Combine those factors and you can understand why it's difficult to make progress in AI. In many ways, you can argue that we're shooting in the dark. Of course, we have made *some* progress, so we know we're getting some things right. But without a real comprehensive theory about *how* AI should/will work, we are reduced to a lot of trial and error and iteration to move forward.
Upvotes: 4 [selected_answer]<issue_comment>username_3: I am assuming by AI you mean AG(eneral)I, not machine learning or expert systems tuned for specific tasks.
In addition to @username_2's answer, sometimes we run out of samples to train on, and sometimes computers are simply too slow to process enough samples in manageable timescales. @username_1 mentioned memory, but on the surface our supercomputers have more than enough memory to store a human brain matrix. What we lack is the ability to simulate it in real time. Even after we are able to do that, we still need to connect external input, which requires even more processing power. Even that would not be enough to simulate a human brain fully, as biochemistry plays an important role.
One final note would be there is little incentive to develop AGI other than understanding how the human mind works. There are classification algorithms, expert systems, knowledge engines that can out-perform even the best humans for specific tasks.
Upvotes: 2
|
2016/08/02
| 635
| 2,829
|
<issue_start>username_0: Why somebody would use SAT solvers ([Boolean satisfiability problem](https://en.wikipedia.org/wiki/Boolean_satisfiability_problem)) to solve their real world problems?
Are there any examples of the real uses of this model?
|