2015/04/30
<issue_start>username_0: For one of my class projects, the teacher coded the entire solution, then took out a few sections that we are supposed to complete for our assignment. This incomplete code was distributed to the class as a starting point. When he was writing the solution, he used [source control](https://en.wikipedia.org/wiki/Revision_control). He forgot to delete his repository, so all of his commits are there... You can check out the commits to get the full solution. I don't think he is aware of it, because he hasn't brought it up in class.
I found this while I was working on my solution. Once I found it, it was difficult to find a different path to the solution. I think my solution is dissimilar, but I feel I am playing a dangerous game here.
I'm not sure what I should do here. I could choose to not tell him and hope they don't catch it, or I could choose to tell him and basically admit I had access to the solution while I was working on it.<issue_comment>username_1: Send him a polite email explaining that, while you completed the assignment on your own (which I am assuming to be true), by stumbling upon the solutions in advance you had a hard time doing anything different. Professors are people--he should recognize his mistake and appreciate that you were forthcoming. Chances are he'll reassign the project or discount its weight toward your grade and just tell people to review it anyway because you need to know the material. The sooner you say something the better though. Indeed, after a while it may look like you were trying to hide it from him.
Upvotes: 6 <issue_comment>username_2: You should tell him. It's going to be a headache for him when he finds out, so you may as well give him a heads up. Tell him you've gotten a solution which is similar but that does contain original thought and see what he says. Best case, he'll be appreciative of your honesty. Basically the way I see it is you have one path that's ethically sketchy and one that's a good thing to do (because, again, he'll have to deal with this). So, you know. Do the right thing.
Upvotes: 5 <issue_comment>username_3: Part of finding a solution for a problem is taking a small amount of time looking for known solutions in available resources. The teacher should not only appreciate your honesty, but also your effort to look for existing solutions and reusing the best parts of it.
You did a good job, and you could tell him that you did some research during development and discovered that a solution similar to yours was present in the source control history.
Upvotes: 2 <issue_comment>username_4: I am a retired university teacher and have made similar mistakes a couple of times during my 32 years at the Royal Institute of Technology in Stockholm.
Each time, be it on a written exam, or on a home assignment, it was rapidly discovered, mainly because students realized it would cause a problem when grading the results.
So, if you don't report it to the teacher, someone else most probably will and all those who took advantage will stick out in a bad manner.
Do tell him.
Upvotes: 5 <issue_comment>username_5: To be honest I don't know if there is anything you can do at this point. You have completed your assignment and as a fellow student I feel like once you know the answer, its hard to unseen the answer and forget about it and come up with a new one. And if you do tell the teacher about this, most students will probably have realized this too and the sneaky ones will probably have copied it and stored it somewhere before redistributing it again (believe me, and I am probably one of the sneaky ones). So yeah if you do tell him, I don't think it changes anything to be honest. I have a feeling my answer is a bad one.
Upvotes: -1 <issue_comment>username_6: Tell him. Failing to do so may result in dire repercussions for your academic career. But don't panic, honesty is always appreciated.
Upvotes: -1 <issue_comment>username_7: Tell him. Your integrity is worth more than any pass mark. If your teacher isn't teaching you that, then he isn't worth jack.
Upvotes: 2 <issue_comment>username_8: Yes, tell the teacher. It's just the *right* thing to do.
If for some reason you need a self-serving reason: the teacher will immediately realise that your solution is exactly the same as theirs, the whole assignment will have to be reset, and you and everyone else will have to do another piece of work. Tell them now so it can be rectified; it's better for everyone.
Upvotes: 0
2015/04/30
<issue_start>username_0: I have read the Springer [copyright conditions](http://www.springer.com/?SGWID=4-102-45-154182-0) and [self-archiving policy](https://www.springer.com/gp/open-access/authors-rights/self-archiving-policy/2124), but they are a bit confusing. For example, the first of them says:
> Author may self-archive an author-created version of his/her Contribution on his/her own website and/or the repository of Author’s department or faculty.
And here is the quote from the second source:
> Authors may self-archive the author’s accepted manuscript of their articles on their own websites. Authors may also deposit this version of the article in any repository, provided it is only made publicly available 12 months after official publication or later. He/she may not use the publisher's version (the final article), which is posted on SpringerLink and other Springer websites, for the purpose of self-archiving or deposit. Furthermore, the author may only post his/her version provided acknowledgement is given to the original source of publication and a link is inserted to the published article on Springer's website. The link must be provided by inserting the DOI number of the article in the following sentence: “The final publication is available at Springer via http://dx.doi.org/[insert DOI]”.
Do I understand correctly that I can put my own version of the manuscript on my own (or my institute's) website immediately after acceptance, but I can submit it to arXiv only 12 months after official publication? If so, that doesn't make sense to me. If the manuscript is made publicly available on my own website just after acceptance, why can it not be made publicly available on arXiv at the same time?<issue_comment>username_1: Here's a [closely related question](https://academia.stackexchange.com/questions/42792/how-can-springer-and-wiley-put-a-12-month-embargo-on-posting-post-review-revisio).
And yes, I think it's a bizarre and inexcusable policy--I was so shocked when I realized that they are demanding this that I asked the question linked above.
I am not a lawyer, but it is not clear to me why you could not submit to the arXiv prior to signing the copyright transfer agreement. In the absence of a contract with Springer, I cannot see how they can possibly have any say over what you do with your manuscript. They can refuse to take a manuscript that is posted on the arXiv, and they can refuse to do business with you again -- but if you haven't signed a contract, you cannot have violated one.
That said, they leave open a loophole big enough to drive a train through, and it seems to me that they do so deliberately. From their policy, right after the part you quoted:
> Prior versions of the article published on non-commercial pre-print servers like arXiv.org can remain on these servers and/or can be updated with the author’s accepted version.
So it seems to me (again not a lawyer) that provided that you post to the arXiv pre-submission, you are welcome to update it with the accepted version without waiting the 12 months after publication. That seems the obvious course of action, especially given all of the reasons to post to the arXiv at submission time anyway.
Upvotes: 4 <issue_comment>username_2: The short answer is: yes, staggered posting rights like this are very common - many journals distinguish between your own website, your institutional repository, and a broader repository (e.g. arXiv, PubMed), with different rules about what version of the article can be posted and when; some have also begun to provide a special category for sites like ResearchGate.
The underlying answer -
**Why on earth do they do this?**
On the one hand, this doesn't make sense - the PDF is the PDF and once it's picked up by a search engine, it's going to be readable wherever it's hosted. But consider it in terms of discoverability and scale:
* one hundred papers from the journal on one hundred obscure personal sites;
* one hundred papers from the journal on twenty well-organised institutional sites; or
* one hundred papers from the journal on one disciplinary repository
From a publisher's perspective, the first option is not something to worry about, while the second & particularly third options look quite scary. Remember, what they really don't want is everyone saying "great, we can get this all from arXiv, cancel next year's subscription please". So an embargo period gets attached to the repository copies - many journals (most prominently PNAS) manage fine on a subscription basis while still making older papers freely accessible after a year or two, and so it's well-understood that allowing delayed access in one form or another will not ruin the subscription income.
Now, mass cancellation because of self-deposited papers being available instantly is a bit of a bogeyman. No-one's really shown it would happen on a large scale (and indeed arXiv suggests otherwise); library budgets are not (yet) squeezed enough that we've had to start thinking seriously about it; and in any case "big deal" subscription models often make it impractical to cancel specific titles. But it's looming as the threat and most publishers simply don't want to risk it... so they produce very conservative guidance on what you're allowed to do, and work from there.
Over the past few years, many of these publisher limits have loosened slightly as they discover the sky didn't fall, which I suppose is something. At the time of writing, at a very loose generalisation, policies for non-medical scientific journals are mostly converging along the lines of "accepted MS immediately on your own site, accepted MS in a repository after a year, publisher/proof PDF never", but there are a thousand variations.
Upvotes: 6 [selected_answer]
2015/04/30
<issue_start>username_0: I have been working towards my PhD for 2.5 years now. A few of the students who joined my advisor's group after me have recently published in top-tier conferences. I have worked pretty closely with them, mostly troubleshooting their problems while they were running the experiments or studies for those papers.
I am very happy for them and have nothing against them, but when I see that I don't have a single publication to my name, I get a bit depressed. Earlier, this used to give me motivation, and my killer instinct just made me work harder. But as time has passed, I feel demotivated. My advisor still considers me among his best students. Mostly, I haven't published my work so far because I didn't feel it did justice to my expectations, and I wanted to send it to a top-tier journal, which would require extensive evaluation and mathematical rigor that I am still working on.
Is it common to be intimidated by such situations, and what is the best way to overcome the feeling that you haven't done anything?
Note:
* I have a lot of work but am unable to overcome this feeling when, every few months, someone gets a paper accepted to a conference.
* I am reluctant to send papers to conferences since I am a self-sponsored student and not full time, so I want to use the funds available to me judiciously. Conferences won't fund me since their support comes with a rider of being a full-time student.<issue_comment>username_1: First off: if you are not a full-time student, it's natural that you will take longer than others (who I presume *are* full-time students). Don't feel too pressured by this.
Second, if you are involved with your colleagues' papers, should you be a coauthor on them? Especially if this took time you would otherwise have spent on your own projects?
Most importantly, have you discussed your specific problem with your advisor? It appears like getting a publication accepted would be helpful to your peace of mind. Your question has a faint whiff of perfectionism, which may prevent you from submitting and publishing. Your advisor would be the logical person to discuss this with. He should have a good understanding of the expectations in your specific field. If he considers you one of his best students, this is a good starting point!
I would recommend that you write your advisor an email along these lines:
> Recently, some other students in our group got papers accepted at upper-tier
> conferences. I am extremely happy for them, and this motivates me
> immensely to follow their lead. I would like to get something accepted
> at a comparable venue soon. However, I feel unsure of what level of
> quality is sufficient for publication - at what point do I stop
> polishing and submit?
>
> Could we have a meeting in which we discuss some of my current
> projects and work out what I still need to do to get a manuscript into
> submittable form, ideally with a time plan? Thank you!
Finally, I assume you have already read and taken to heart [this canonical question](https://academia.stackexchange.com/q/2219/4140), right?
Upvotes: 6 [selected_answer]<issue_comment>username_2: I assume you are in computer science, since you are talking about top-tier conferences. So I am going to talk from a pure CS perspective, since in CS you need publications in order to get a PhD.
First of all, do not let your ego get in the way of you getting a PhD. To get a CS PhD, you need a critical mass of good, solid publications. That means a sufficient number (not just one) of good to excellent papers. According to the [CORE conference ranking](http://www.core.edu.au/index.php/conference-rankings):
```
Conferences are assigned to one of the following categories:
- A* - flagship conference, a leading venue in a discipline area
- A - excellent conference, and highly respected in a discipline area
- B - good conference, and well regarded in a discipline area
- C - other ranked conference venues that meet minimum standards
```
I know many people who earned a CS PhD and went on to successful careers who, during their PhD, did not have any A\* conference publication. Instead they had 1-2 A publications, a couple of Bs, and some additional workshop / demo / poster publications. IMHO this is the most realistic plan for actually getting a good CS PhD and getting it relatively fast. So, although it is good to aim high, you should know that getting an A\* conference publication (VLDB, SIGMOD, FOCS, SODA) is simply not possible for all CS PhD students (and certainly not necessary) within the duration of their PhD.
In this sense, aiming for that one perfect publication that would take 4 years to write is counter-productive for you, because:
* If it gets rejected you start from null
* It is not enough to get you a PhD, because you cannot have a CS PhD thesis of 30 pages
* Everyone will assume that it is your supervisor's doing if you cannot follow through with equally good publications
* If you are Mr. Nobody in single-blind conferences, it will be harder for your work to be accepted initially, since nobody knows you or trusts you
* It is very difficult to do, because writing a paper of this magnitude needs experience, and for mere mortals like us this kind of experience can come only after writing many good papers and going through many rejection / resubmission cycles (unless you are <NAME>)
* If the paper is that good, it really does not make that much of a difference where it was submitted. I know of seminal papers presented at B-conferences with 300 - 1000 citations, and papers in A\* conferences that nobody cites.
Also, your plan of sending it to a journal is even worse, since it might take a year before it gets accepted, and in the meantime many other people could catch up with you. And even if you did write this excellent paper, it would take some time for that publication to take off and become known - and until then, what? You will have finished a PhD with a citation count of less than 10. This is also bad on all counts.
On the other hand, putting out consistently good, solid publications in respectable A or B conferences at a frequency of 1-2 papers per year means that in the 3-5 years of your PhD you may have 4-8 good publications, and:
* People will know you for you (and not your supervisor) because you have proven yourself consistently by producing good papers at regular intervals
* You will have gained better experience on how to actually write good papers and sell your ideas better
* You will have a better citation index, due to you citing your own work and other people citing it because the related community knows and respects you
* You will get review requests that will inform you faster about the current state of the art
* You will find external collaborators faster, which means even more good papers for you.
In a nutshell, my advice is: If your work has produced good-solid results, wrap it up and publish it, instead of aiming for perfection. Once your first work is out, everything else will be on its way.
Upvotes: 4 <issue_comment>username_3: As others have mentioned, you sound like a perfectionist. This probably won't surprise you.
You might not be aware that perfectionism can be a problem, and it's an endemic one among researchers — it's a common obstacle to getting a PhD. The pattern matches your description: you set yourself excessively high standards; you miss them, so they make your life worse; yet you stay attached to those standards, and ultimately you achieve worse results, both because of motivation problems and because you, for instance, self-censor your work, or work too much, or both.
I've had the same problem, and the self-help material below was recommended to me. Of course, it's up to you to decide whether you recognize yourself in this pattern.
<http://www.cci.health.wa.gov.au/resources/infopax.cfm?Info_ID=52>
Upvotes: 3
2015/04/30
<issue_start>username_0: I almost didn't get my BS degree because I have never successfully tolerated a foreign language course. My most recent attempt was no exception. I paced about in front of the building trying unsuccessfully to force myself to enter the classroom. Surely this represents a severe defect of my character, but I maintain that an alternate teaching strategy would make success possible for someone like me. An extreme social phobia of this type is rare (<2%). There will never be a house reform because those who gravitate to teaching language are extroverted, and communication is a social activity, so there is little basis for empathy. This is something that I need to figure out, because I learned that the math PhD program requires fourth-semester proficiency. I am sure that the masses love this ubiquitous type of social teaching philosophy, and that the best way to handle statistical anomalies is to tell them to "get over themselves" and "get with the program". This is what I have tried to do, so much against my nature. Language is the only department where this is a problem for me. The format is always the same.
How can someone with a debilitating social phobia get through a foreign language course?<issue_comment>username_1: First, social phobia is not a character defect. It is a recognized medical condition. You are not worth less as a human being if you suffer from a phobia.
Next, depending on your school, you may be able to get a medical exemption from certain requirements. You may want to discuss this with your local student services. (By email, if meeting people in person is impossible for you.)
However, *I would strongly advise against this approach*. You will need to interact with people after leaving college, too, so avoiding the problem is not a good strategy in the long run. Instead, I would recommend that you actively work on this issue.
The good news is that social phobia is very amenable to cognitive behavioral therapy (CBT). The accepted form of therapy is a desensitization approach, where you will progressively learn to tolerate being around people.
No, this will not be easy. You will need to work on your disability. You will encounter setbacks. But your student services should again be able to point you towards resources and therapists that can help you. And there is no better time for doing this therapy than now, when you have a more-or-less flexible schedule, and before you hit the job market, where social phobia will be an *enormous* problem - regardless of which career you want to pursue.
Good luck!
Upvotes: 5 <issue_comment>username_2: I am not aware of your location, but I can comment on the position in the UK. If a student (or academic or researcher) declares a disability to their institution, then (together with clinicians or other appropriate evidence and an assessment of need) the institution is required by law (Disability Discrimination in Education) to make provision and adjustments (except where the adjustment would be unreasonable). Only the courts can rule on what is reasonable or not.
However, an individual may choose to keep their condition private, in which case the institution has no obligations until the point of disclosure. Privacy law overrides disability law in such cases.
I have responsibilities for students with alternate needs in my subject area, and this includes students with social phobia conditions, some as extreme as you describe.
We have, for example, conducted oral examinations and tutorials using Skype between two adjacent rooms so that the student does not have to share a space or be overlooked by another person.
It can be done, if there is the will and motivation to do it on the part of the institution.
**Edit:** However, I also strongly concur with Stephan's answer. When a student has declared their condition, they are offered help and support. In particular, CBT and other therapies are very helpful in putting students and staff in better positions for employment and promotion.
Upvotes: 4 <issue_comment>username_3: One alternative is to take an online course. There are many MOOCs that offer foreign language courses. You may then try to solve the assignments and take the exam of the real course to get the credit. Of course, this will require some convincing of the authorities that you would like to skip attending because of the phobia. I should add that you may want to consider seeking therapy for your condition. I have heard positive stories of therapy working, even in rare cases like the one you describe.
Upvotes: 2 <issue_comment>username_4:
> ... I paced about in front of the building trying unsuccessfully to force myself to enter the classroom. ...
I'll be honest with you. If you cannot get over yourself and force yourself to face your fears, there's little point in getting your degree, since you're extremely unlikely to land a job that requires a degree and in which you could function.
If you don't want to quit, my advice is to deliberately seek out social occasions as often as possible. Face your fears and try to desensitise yourself to your phobia. It will be hard, but it's the only way.
Upvotes: 4 [selected_answer]
2015/04/30
<issue_start>username_0: After completing my dissertation proposal, I got data from a company for my idea. After that, I began searching for an advisor for my dissertation.
I talked with an assistant professor. Let's call him Alex. He asked me to send all of the collected literature and the proposal to him. A few days later, he informed me that he could not accept me as his student.
I found another professor (Bob) and then began to work with him. However, my advisor forced me to give up the proposal without any clear reason. I eventually gave it up because, for a month, he kept asking me to find a different topic.
About 2 years later, I found that my advisor and his former student (Carol) had been working on my proposal topic and had begun to submit several papers.
The story was this: Carol was in a **relationship** with Alex, whom I had initially contacted before starting to work with Bob. Alex sent my materials to Carol, and she contacted Bob to ask for his help in developing the idea.
Something similar happened one more time as well: Carol was unable to find a topic, so Bob provided my research ideas to her again.
This fall, Carol's tenure will be under review. I'd like to send a letter to the dean of the department to report this. However, I am unsure how to do that effectively. I appreciate any suggestions on this.
By the way, I do not work with Bob anymore. I had a chance to talk about this matter with Alex. He said that he just shared the proposal with Carol because it was an interesting idea. He did not know about the rest, or why and how she worked with Bob. Finally, Bob said that it happened because Carol brought the idea to him before I met him.<issue_comment>username_1: Pragmatically, your best solution might be to just move on with your life. Not all wrongs have a good remedy.
It's not entirely clear from what you said whether C did anything wrong, or whether it's directly relevant to C's tenure case. But even if C did something wrong, it might be hard to prove, as it will be full of "he-said-she-saids" and "C should have known" and so on. (For instance, how will you prove that C knew that the ideas were improperly shared/appropriated from your proposal? How will you prove that your ideas were substantial enough that you should have been a co-author?)
And even if you were to prove it, there's a non-trivial risk that your standing in your community will suffer. You could become known as "hard to work with". Given how the process works, it's a near-certainty that if you were to send a letter to the dean complaining about this, word *will* get out.
Instead, there's a good chance that your best response is to simply move on with your life. When a collaboration with someone goes bad, the most effective remedy is often simply to not work with them again. Yes, it sucks, but that's life. The best strategy is to learn to be resilient, bounce back, and keep doing good science. Do good science, and people will come to respect you. Don't worry too much about others; focus on yourself, on being the best person and the best scientist you can be. It's natural to be angry and upset; my advice on that is to find a sympathetic friend and tell them how you're feeling -- take a moment to get it out of your system... and then move on.
Caveat: This answer is speculative. It's unlikely we'll be able to give you definitive advice, given the level of detail in your question. It will be difficult to give conclusive answers to your situation with any degree of certainty; only someone with detailed knowledge of the specific situation you're in can help you. Therefore, if you still have doubts, I suggest finding a senior member of the field who you trust and respect and talk privately with them to solicit their advice.
Upvotes: 5 <issue_comment>username_2: Move on, and move far.
First of all, an idea is nothing in science. I know everything said in lecture rooms suggests the opposite, but in practice an idea is just an idea. When you work out someone's idea yourself (e.g. as a graduate student or a postdoc told to do so by your prof), working on the details for several years, building up a whole research topic and writing several papers from a 5-minute vague description that someone gave you - you will feel the same.
Second, it is very unlikely that a dean would even raise his/her eyebrow, whatever you do. Was what A, B and C did unethical, if your description is true? Yes. Does it happen often in science? Oh, you bet. Can you prove it? Do you have lab notes, with proper time stamps and signatures, showing you did any preliminary work? Your story also contains a lot of allegations and juicy details; you cannot prove half of it, and it also gives A, B and C room to deny even parts of the story.
Can you argue that it is such an important issue that it should automatically nullify a tenure case, or at least be worth some serious disciplinary action? Even if you had some evidence, you would have to prove that something malicious happened, not accidental negligence, so I hardly think so. Also, there is a high chance that your dean / most professors on any disciplinary committee would share my opinion on the first point, even if you don't think so.
Third, an idea is just an idea. If you don't have 10 new ones every day, don't go into science. You may have lost a couple of weeks (days?) of work, but going into battle for this will earn you nothing. Learn to keep a distance from people with questionable reputations.
Remark: anyone with a "-1" is welcome to actually comment or argue
Upvotes: -1 <issue_comment>username_3: In two of the answers, people suggest that you just move on and forget about what happened. I disagree, and I think that if you truly believe that people took advantage of you (in my answer I assume that you described the situation correctly), you should stand up for yourself and be firm.
I do agree with username_1's answer that your chances are not high, and not all wrongs have a good remedy, but I think that you have to report it because:
* how will the next potential students of A and B know that they are not people who should be trusted?
* when you hear a shot and the sound of breaking glass in the night, you call the police - not because you think that your information is enough to find the culprit, but because you do not want this behavior to continue. Whether they will be able to find the person who did it, and whether it is really worth opening a criminal case, is not your responsibility.
Sorry, but I am unable to give you a suggestion as to how to approach this situation, but please do not give up.
Upvotes: 5 <issue_comment>username_4: I really feel for you.
Two or three well-positioned academic donkeys get an insightful - and potentially career-making - idea from someone else. First its value is denied. And then they just steal it from the submitter.
Proving that intellectual larceny occurred is almost impossible. And even where possible, university managements are more likely to protect the wrongdoing professor as they set more store on avoiding bad publicity than on vindicating truth.
Bring a witness (preferably one of the opposite sex, for social perspective) and seek a short meeting with the university rector/provost/president.
Without sitting down, briefly state the situation and leave details of the sinning department and professor with your host.
Then walk out promptly.
Upvotes: 0
2015/05/01
<issue_start>username_0: A friend of mine who is a CS grad student at a US university took a class in which the students were given an extra hour on a two-hour exam. The trouble is that this offer was made in the last five minutes when the professor realized the students were not able to finish answering all questions.
I am personally against this kind of offer because it feels like it does more harm than good and reflects poorly on the professor's planning. Students plan their approach to solving a test based on the available time and often speed up in the last hour or 45 minutes to complete as much as possible. This certainly degrades the quality of the answers. Now, when there is a sudden offer of extra time, many students will be confused about how to make use of it. Ultimately, it will come down to the students who have better time management skills rather than those who really know better answers.
So, the question is: Is the practice fair and if not what a student can do about it?<issue_comment>username_1: As I've mentioned elsewhere on this site, students often seem to think in terms of "fairness", but upon sufficiently intense scrutiny the concept is so fuzzily defined that it may well be that the only "fair" grading scheme is to give all students the same grade...an outcome which would certainly be unacceptable to many students. Moreover:
> Ultimately, it will come down to the students who have better time management skills rather than those who really know better answers.
Any way of administrating a course favors students with certain incidental skills over others. Giving a prearranged, timed exam favors students with sufficiently good time management skills over students with very poor ones. And not only that, the students who live "on-campus" are advantaged over commuter students. Students who have a watch can look at it, whereas others need to twist around in their seats to look at the clock in the back of the room [if there is one], and so forth.
Every seasoned instructor I know would agree that most exams test subject-unspecific study skills as well as "real knowledge". Not to be too much of a downer, but the idea that a student has a well-defined "real knowledge" is a convenient reification, something that modern academic culture must believe in *approximately* in order to function but which it is dangerous to take *too* seriously.
The real issue is to design the exam experience so as to test a reasonable set of skills, weighted in a reasonable way. There is no universal way to do this: it is better to be as explicit as possible about what skill set you want your exams to test and look to see if they do what you wanted.
Anyway, a better question is: is this a good practice? I think the answer is **no**, at least in many situations. Here are two obvious issues:
> * An announcement which occurs five minutes before the end of an exam may come too late for students who have already left the exam.
> * Unless the cohort of students taking the exam has the identical academic schedule [this is prohibitively unlikely in many graduate programs at American universities], it is very unlikely that every student will actually be able to stay for the extra hour.
I would describe the above two issues as concerning "fairness". Any student who did not get the extra hour for either reason would have a very legitimate complaint.
> I am personally against this kind of offer because it feels like it does more harm than good and reflects poorly on the professor's planning.
I agree that it reflects poorly on the professor's planning. Whether it will do more harm than good to the students' performance depends on the exam and the students. I agree that many or most students would find the experience of learning that they have 50% more time at the very end of an exam stressful, and many would be resentful that they would have used their time differently had they known this information at the beginning of the exam.
In summary: unfair? Yes, if certain things happen, otherwise maybe not. A good practice? No, I don't think so. It sounds like a rather inexperienced / sloppy instructor to me, honestly.
Upvotes: 6 [selected_answer]<issue_comment>username_2: My main concern would be whether all students really had the opportunity to take an extra hour. What about students who had something scheduled immediately following the regular exam time, such as another exam? This would certainly be unfair to them.
However, if the schedule was such that all students were available for the extra hour, this situation, although it's not ideal, is not one I would characterize as unfair, and I don't think a student would get very far trying to do anything about it. All students had the same opportunities. Any test is naturally going to have different impacts on students depending on their learning habits and test strategies, so saying it disproportionately helped or harmed students depending on their knowledge or strategies is not a sufficient objection.
You are right in a sense that it reflects the professor's poor planning. Ideally the exam would have been designed such that most students could finish in the originally allotted time. However, many students don't realize that this is much easier said than done. As a professor, occasionally your estimate of an exam's difficulty or length is way off, and you have to do damage control. There are only imperfect solutions to this, and adding extra time, if possible, is among them. I'd say this is a judgment call for the professor, who should take this issue into account when assigning course grades.
Upvotes: 5 <issue_comment>username_3: As already hinted at in the other answers, timed written exams¹ (and every other examination method) are inherently incapable of fully fairly assessing the qualities of interest in a student, as they will always also assess exam-writing skills, psychological robustness, time-management skills and similar. This does however not mean that one should not try to make them as fair as possible.
An important part of this includes having fixed conditions for the exam and informing every student about them beforehand, so they know what they are up to. Changing these conditions without a good reason¹ can *avoidably* increase the importance of skills that are not of interest. Moreover, it is likely to favour those who have exam-relevant skills anyway (and thus have an unfair advantage through the choice of the examination mode anyway, if you so wish). For example:
* Many students mentally prepare themselves for the exact exam conditions and changing them favours students who can adapt. The latter are mostly students who are not nervous about exams and thus advantaged anyway.
* There are several strategies to go through an exam, for example: Trying to attribute equal time to each task, risking leaving tasks half-finished; taking the easy tasks first; taking the more difficult tasks first and so on. Ideally (i.e., for an exam that is well adjusted to the given time), all these strategies are equally good in outcome; in a regular real situation, some of them are favourable, but at least you can decide the best strategy depending on your skills and psychology; radically changing the rules during the exam may strongly favour one strategy and thus give an advantage to those students who chose it (more or less at random).
Perhaps the above comes more clear with a different, more drastic example:
When I studied, one of the central and most difficult exams looked like this: ⅔ of the points were attributed to Topic A; ⅓ of the points were attributed to Topic B; ½ of the points were required to pass; there were no grades. Given this situation, there were several viable, but entirely different, strategies to approach this exam, e.g.:
* If you were good in Topic A, you could entirely focus on it and try to pass the exam without ever addressing Topic B.
* If you were good in Topic B, you could focus on Topic B and then try to obtain the rest of the points from easy tasks on Topic A.
When I took this exam, the tasks for Topic A were ridiculously difficult (but in such a way that you would not notice until after investing some time into the task), while the tasks for Topic B were rather easy². As a result, only one person or so would have passed the exam under the above conditions, and the passing threshold was lowered to ¼ of the points. Due to the latter, the exam became passable by means of Topic B alone (since ⅓ > ¼), and in fact many students passed by overly focussing on Topic B, i.e., by pursuing a strategy that could be regarded as bad under the original conditions. Students who focussed on Topic A were strongly disadvantaged, though. Moreover, even if you were equally good in both topics, you were randomly advantaged if you started with Topic B, since most tasks of Topic A were mostly a waste of time (but you could not tell without doing them).
Of course the situation described above is different from yours, but I hope that it illustrates how a strong change to the exam conditions can introduce additional, avoidable unfairness. In your example, I particularly see the following problem: students who allotted an equal portion of the original time to each task are disadvantaged compared to those who only worked on selected tasks. The latter can just use the additional time to continue with the remaining tasks, while the former have to revisit their existing solutions, which is more error-prone and takes more time, as they have to work themselves into each task again and correct existing stuff. Arguably, adjusting the grading scheme would have been the fairer solution.
So, to sum it up, I would regard the change that you described to be unfair in the sense that it poses an avoidable increase to the importance of non-relevant skills and luck. However, you should keep in mind that any other way of damage control with a badly posed exam would have the same effect and the most fair solution would have been not to change anything (probably resulting in disproportionately bad grades or failing rates).
As stated [in a comment by O. R. Mapper](https://academia.stackexchange.com/questions/44574/is-it-fair-to-offer-students-a-last-minute-extension-to-finish-a-test#comment99453_44574), some universities do not allow for such changes, probably for exactly that reason.
At my university, this is usually addressed by not fixing the grading scheme, so students know beforehand *how* damage control is going to happen (though even this may lead to unfairness in extreme situations such as my example).
---
¹ For example, an external cause such as an unforeseen and unavoidable major noise disturbance.
² To give you an idea: Despite being far more concise, the sample solutions for Topic A were ten times as long as those for Topic B.
Upvotes: 3 <issue_comment>username_4: This kind of extension is unfair, even if given to all students and even if all students are able to spend an extra hour in the room. It is unfair because it affects students differently based on an arbitrary criterion.
Suppose that Alice and Bob could have written perfect answers to the two-hour exam in three hours and they both realised that right at the start. Knowing she only had two hours, Alice decided to write perfect answers to two thirds of the questions, whereas Bob decided to answer all the questions but in a sketchy way that would score about two-thirds of the marks. When it is announced that there is a surprise extra hour, Alice can just use that time to write perfect answers to the last third of the exam but it is essentially impossible for Bob to rewrite his sketchy answers into full answers.
So, based on an essentially arbitrary decision they made at the start of the exam, two students who would have both scored 100% if allowed three hours from the start end up scoring 100% and, say, 75%. That is not fair.
It's also very unlikely that all the students can stay the extra hour. And what do you do about the student who has another exam starting an hour after your exam finishes? He has to choose between disadvantaging himself in your exam by leaving early or disadvantaging himself in the other exam by not having a break before it and not being able to eat lunch.
Upvotes: 3 <issue_comment>username_5: I'm surprised so many people think that's fundamentally unfair; I've had professors add time to a test on several occasions that seemed to make the test *more* fair.
Example: one of my physics professors (in a Thermodynamics class) always scheduled our midterms (3 of them) for 7-9 pm on Thursday nights to make sure it didn't interfere with class schedules. He once added an hour to the end of a test because we were slower than he expected, and he did so with only a few minutes left. His reasoning was that he *made* us set aside two hours to guarantee that we could finish it *if we studied hard enough*, but he didn't care if we needed extra time to prove we could figure the material out -- **the test was about what we knew and could solve, not how quickly we could do it**. So if you couldn't stay the extra time, it wasn't his problem -- you would've done fine if you'd studied as much as he recommended, and you knew in advance what topics the test was covering. Basically, it was your own fault if you did poorly in the initial two hours, and not his problem if you needed more time but had to leave. In that case, it absolutely *was* a case of the class not studying enough -- we readily admitted it -- and it was up to the individual whether they wanted to use the time to review their work, finish/improve partial answers, or start problems they hadn't yet gotten to (or leave without using the extra time).
As for everyone complaining about people who'd already left: either they were already done with the test and satisfied with their answers (if I don't want to use my last ten minutes, why would I use an extra hour?) or they had no idea what they were doing so they gave up and left early. Either way, it's not the prof's fault. Example: another physics professor (in a Quantum Mechanics class) made a typo on a midterm once, making the problem unsolvable. Nobody pointed this out to him until we had twenty minutes left on a two-hour test, by which time several people had already left. He told us what the question *should* have been, and gave us an extra half-hour to make up for time we may have wasted on his mistake. Someone asked what he would do for the students who left early, and he responded that either they: figured out the typo for themselves, solved the problem, and left (unlikely); applied very incorrect math to wrongly solve the problem, thus demonstrating they had no idea what they were doing anyway; or they gave up early, in which case it wasn't his fault that they chose to forfeit their remaining time (and who would give up early if they thought they knew the material?). In no situation was it his problem that they weren't there to benefit from another student identifying the typo, so he wasn't going to give them more time. If they didn't have time to stay, it wasn't his fault -- as in the previous example, students who knew the material well enough never should've needed that much time to begin with.
I *do* see a problem in non-STEM fields, or on short-answer/essay tests. That would be unfair since it would give some students more time to proofread and otherwise analyze their answers when there never should've been a time management issue in the first place. But when the test is all about *do you understand these math/science concepts well enough to apply them?*, extending the time allowed for an exam can be very fair.
Upvotes: -1 <issue_comment>username_6: I presume a fair exam is an exam that does not violate students' legitimate expectations.
An important expectation is that the same rules apply to everybody. On the one hand, the extension violates this expectation. As others have pointed out:
1. It privileges students with bad time management
2. It privileges students who are able to actually stay longer (and have no following appointments etc.)
On the other hand, it is also a legitimate expectation that an exam not be excessively difficult; that is, it should be doable as long as the students have understood the material. If students fail the exam, it should be because they were badly prepared, not because the exam was badly planned (for example, by being too extensive).
If the professor realizes too late that the exam is too extensive, she is caught in a dilemma. Both granting and not granting the time-extension is unfair. Off the top of my head, I can think of two ways around this:
1. Schedule another session with all students to finish the exam (although there's still the time-management issue)
2. Be more lenient when grading the exam (this is probably the best solution)
Upvotes: 2 <issue_comment>username_7: The rules for a test, as well as for projects, should be set before the test and not changed during it. That being said, time limits on tests should only account for a portion of a test's overall value; the primary value should be placed on the answers. If the grader feels the test was poorly constructed, adjustments should be made during the grading process to assign the grade based on the quality of the answers, and a re-test given if necessary. Education is first about learning, where punctuality is a desire of the wealthy. Edison took his time on the light bulb, or he likely would have finished on time and we'd still be in the dark more than we are today.
As I don't have a good enough reputation to comment, I'll respond to [Professor Clark's](https://academia.stackexchange.com/users/938/pete-l-clark) comment here in my edit.
I was once told, "You can write the history of the world on a postage stamp. To do a decent job of it would likely take as long." Unless there is unlimited time, or the grade is based solely on the quality of the answer, the allotted time for a test governs the quality of the answer. By changing the time available to turn in the test at any point after the test has begun, the instructor hasn't really changed the original test; rather, he/she has created a second test, and the combined tests should be graded accordingly. For the most part, our brains are stupid like computers: they work simply on the available inputs. As the inputs change, they do their best to re-factor. I don't believe many instructors, other than those in theatre, teach improvisation, which would be adjusting to changed inputs, similar to an extension of time on a test. Maybe there is more value there - teaching how to improvise!
Upvotes: 2
2015/05/01
<issue_start>username_0: I work as a software developer in industry, but I like to read various academic papers to get exposure to ideas that may benefit my work. I may not apply exactly the technique described in a single paper, but my work is certainly influenced by the ideas I've encountered. My question, though, is what is the proper way to provide attribution for the papers that have influenced my work?
If the product of my work were an academic paper, the rules would be quite clear, but here I'm making a product that is ultimately my employer's IP, for which the implementation details may even be considered trade secrets. I'm struggling to find the proper ethical reconciliation between my professional obligations and the spirit of academia that favors free dissemination of knowledge in the hope that it encourages more of the same in return. I don't want to steal anyone's work, but neither do I want to shut myself off from valuable lines of thought.<issue_comment>username_1: You don't need to do anything. Academic journals are published for the betterment of mankind. If you acquired said articles legally, then you are entitled to use their contents within your corporation as you see fit as long as you do not directly copy and distribute text or software code that is under copyright without a license. If you reimplement an algorithm described in a paper, you are fine. The only problem may come if you try to implement something covered under a patent (like the RSA patent), but these kinds of things are not typically published in journals these days.
There are more reasons that academics publish than encouraging more published work. When I publish, I hope that my ideas are used by any and all to fulfill their own goals. When I want specific kinds of returns on my work, I will choose avenues of dissemination other than publishing in conferences and journals.
There's no ethical issue here, in my opinion.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Using published work is not stealing it, so rest easy on that score. This is particularly the case with algorithms, which in theory cannot be patented though [the practice is a bit muddier](http://en.wikipedia.org/wiki/Software_patent). You are also not bound by the formal stricture of citation, because you are not writing a scientific paper.
As a practical matter, however, you should provide citations. The reason is that an idea obtained from a scientific paper generally has some subtlety to it, and somebody going over the code needs to know that there are precise reasons it is set up the way it is---whether that's a co-worker who's never read the paper, or you in six months when you've forgotten its details.
So put citation-style references in the comments relevant to the code. For example, if it's an algorithm, put it in the class file that implements the algorithm; if it's an architectural idea, put it in the README that describes the architecture choices you have made. Give the next programmer a trail back to the source of the ideas so that they are less likely to accidentally break everything by "improving" something to violate a subtlety of the work that is being applied.
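To make that concrete, here is a minimal sketch of what such a trail can look like. The algorithm, class name, and the cited paper below are invented placeholders for illustration only, not a real reference or any particular project's code:
```python
import heapq
from typing import List, Tuple


class DecayingPriorityQueue:
    """Priority queue whose stored priorities decay on every pop.

    Implementation note: the decay step follows A. Author and B. Author,
    "Title of the Paper", Journal Name, 20XX, doi:10.XXXX/XXXXX
    (placeholder citation -- point the next reader at the actual paper
    you used). The default decay factor of 0.9 is the value recommended
    there; change it only after re-reading that paper's analysis.
    """

    def __init__(self, decay: float = 0.9) -> None:
        self._decay = decay
        self._heap: List[Tuple[float, str]] = []  # (negated priority, item)

    def push(self, priority: float, item: str) -> None:
        # heapq is a min-heap, so priorities are stored negated.
        heapq.heappush(self._heap, (-priority, item))

    def pop(self) -> str:
        # Serve the highest-priority item, then decay what remains,
        # as described in the cited (placeholder) paper.
        _, item = heapq.heappop(self._heap)
        self._heap = [(p * self._decay, i) for p, i in self._heap]
        heapq.heapify(self._heap)
        return item
```
The algorithm itself is beside the point; what matters is that the docstring and comments leave the next programmer a trail back to the source, so they don't accidentally "improve" away a subtlety the cited work relies on.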
Upvotes: 2
2015/05/01
<issue_start>username_0: I have written an article in the field of Information Systems and I would like to publish it in a journal. I downloaded the Scimago journal rankings and decided to publish in a journal called [Journal of Theoretical and Applied Information Technology](http://www.jatit.org/index.php). This journal appears to be in quartile Q3 of Scimago, but one thing that caught my attention is that they publish three volumes per month. Each accepted submission must pay a fee of 300 USD.
When I was searching for the reputation of this journal, I found that it is listed in the famous or infamous Beall’s list. My questions are:
* Why does this journal appear on Scimago if it is a predatory one?
* Would it be a good idea to publish in it?<issue_comment>username_1: Beall's list is grounds for high suspicion, not a ban. In the case of this particular journal, it looks like not a very good journal, but not an obvious scam. Google Scholar finds a number of articles with moderate citations, and on first inspection they don't look like metric gaming, so it looks like it wouldn't be a black mark on one's record.
Bottom line: probably legit, but if you've done good work, isn't there somewhere better where you can publish it?
Upvotes: 5 <issue_comment>username_2: I see several red flags, which don't prove the journal is bad but make me very suspicious. At the very least, the journal is run in an eccentric way.
1. At the bottom of the [editorial board page](http://www.jatit.org/Editorial_Board.php), it says "You can join the elite panel of JATIT as member technical editorial board if you hold a PhD in computing and have at-least 10 publications in International Journals/Conferences. Please drop your CV at managing\_editor at jatit.org. Members lists and requests are reviewed at the end of every year in regional advisory panel meeting." Of course this doesn't guarantee everyone who applies will be accepted, but it strongly suggests that they feel a PhD and ten papers is a reasonable criterion for being an editor. No mainstream journal takes such an approach, and it raises the question of why they would do this. One possibility is that the publisher wants to publish as many papers as they can (to increase profits) and is willing to accept just about any editor who might help with that.
2. The papers show that copyright is held by JATIT & LLS, which is worrisome given the publication fee. It's common for open access journals to charge a fee but make the paper available under an open license (typically a Creative Commons license) for free distribution and use by anyone. Instead, JATIT owns the papers and can put whatever restrictions they like on them (including changing their policies in the future, for example to put the papers behind a paywall). They don't seem to be abusing this power, but they could if they wanted to. I see no good reason for this approach. It suggests that the publisher either doesn't know how gold open access journals generally work or is deliberately taking a different approach, and both possibilities are worrisome.
3. When I flip through the [published papers](http://www.jatit.org/volumes.php), they look very diverse in topics and approaches. Is the editorial board capable of handling such diverse papers? I don't know, but I doubt it: it's really difficult to handle a submission that falls outside your area of expertise. The easiest way is to apply low standards, which I'd bet is what happens here.
Some journals clearly look fraudulent when examined carefully, and there's no evidence that anyone is actually trying to run a real journal. That's not the impression I get from JATIT. If I had to guess based on admittedly insufficient information, I'd guess that the editors are trying to run a real journal while the publisher is trying to make money (which is not bad in itself but creates a bias towards publishing lousy papers).
As for whether it's a good idea to publish there, one big issue is how it would look. When I see a paper listed in an unknown journal on someone's CV, the first thing I wonder about is what the worst papers published there are like. (Do they regularly publish junk, or does publishing there demonstrate that your paper meets a respectable professional standard?) When you flip through JATIT, do you see papers that look worse than you think yours is? If so, I'd be wary about publishing there.
Upvotes: 5 <issue_comment>username_3: Legitimate journals charge open access fees to make your article freely available and thereby attract more citations. My university has a $3,000 (US dollars) per-year grant for open access publications; it is cheaper for our university to pay fees than to subscribe to paid journals. JATIT is legit. I have published a few articles there, and it is also listed in the [Australian ERA Journal list](http://www.arc.gov.au/era-2015-submitted-journal-list). It is indexed by Scopus, which is the minimum standard for credible journals. JATIT is a low-ranked journal but credible enough. To those who ask why we should pay money for a low-ranked journal: the issue is time. Many journals take two years to get something published, while JATIT can get your paper published in 3 to 6 months. Many of us are on a time clock for tenure and cannot afford to wait 3 years for a publication, so we would rather pay. Also, we need citations, and a low-ranked closed-access journal generates hardly any citations.
In a few words, JATIT is OK, but I wouldn't bank only on this journal to get tenure. It is fine to have a few articles from this low-ranked journal, but you need better ones to be considered a credible researcher.
Our university does not use Beall's list; the main criterion for evaluation is being indexed at least by Scopus, but preferably by Thomson Reuters (ISI). Beall's list is very biased: JATIT is based in Pakistan, and anything that comes from India, Pakistan, China, etc. and charges money goes onto that list right away, disregarding indexes, impact factors, etc.
Upvotes: -1
|
2015/05/01
| 2,393
| 7,811
|
<issue_start>username_0: I just graduated in aeronautical engineering (Master of Science) at the Polytechnic of Milan. In the Italian university system, students who finish their degree get a final grade that goes from 66 to 110 (plus honors). In my particular case, I got a 103/110.
Since I am writing my resume/CV in English, **I would like to convert my Italian final grade to the American (GPA?) and U.K system**.
Can anyone suggest what conversion I should follow?<issue_comment>username_1: To be most accurate, you'd need to convert grades at the course level, weighting by credit hours, where, e.g., a strong grade in a longer course is worth more than it is in a shorter course. You could also ignore credit hours, or assume they're roughly the same across courses.
Either way, one approach is to convert the course grade or final grade on the Italian scale to a simple proportion of points earned over points available. You can convert to GPA based on the proportions for each score in the four-point scale. Wikipedia shows this conversion table for percentage to letter grade and grade point:
<http://en.wikipedia.org/wiki/Academic_grading_in_the_United_States>
Once you convert your Italian points to a proportion or percentage you should then be able to convert to any other grading system. There are apps online that will run the conversions for you. Here's one to try:
<http://www.foreigncredits.com/Resources/GPA-Calculator/>
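For illustration, here is a minimal Python sketch of the weighted conversion described above. The percentage-to-grade-point bands are just one common convention (taken here as an assumption; actual cut-offs vary by institution), and the example grades are made up:

```python
# Sketch of the approach described above: treat each course grade as a simple
# proportion of points earned over points available, convert that percentage to
# US grade points, and weight by credit hours. The percentage-to-grade-point
# bands below follow one common convention; actual cut-offs vary by institution.

GRADE_POINT_BANDS = [  # (minimum percentage, grade points)
    (90, 4.0), (80, 3.0), (70, 2.0), (60, 1.0), (0, 0.0),
]

def grade_points(percent):
    return next(points for cutoff, points in GRADE_POINT_BANDS if percent >= cutoff)

def weighted_gpa(courses):
    """courses: list of (points earned, points available, credit hours)."""
    total = sum(grade_points(100 * earned / available) * credits
                for earned, available, credits in courses)
    return total / sum(credits for _, _, credits in courses)

# Example: three Italian exams graded out of 30, with different credit weights.
print(round(weighted_gpa([(28, 30, 6), (25, 30, 9), (30, 30, 12)]), 2))  # 3.67
```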
Upvotes: 2 <issue_comment>username_2: For the UK system, a first class degree is [generally awarded to those who achieve over 70% of the maximum mark](https://en.wikipedia.org/wiki/British_undergraduate_degree_classification#International_comparisons). I think having got 103/110 you can safely conclude your degree is equivalent to a UK first. On the other hand, I think you could probably just explain how the grading works (perhaps in your cover letter) and anyone would understand that this is an excellent mark.
Upvotes: 2 <issue_comment>username_3: Don't even try.
---------------
There is no "American grading system". There is a fairly common 0.0-4.0 scale for "grade-point averages", but the meanings of those GPA differ significantly among different US universities, among different departments at the same university, and in some cases, even between different instructors of the same course.
The only reasonable way to judge what Italian grades say about your potential for graduate study is to compare them against other Italian grades.
Upvotes: 4 <issue_comment>username_4: I would not do a conversion, I would just give your score and the total possible score, as you did in your question.
Upvotes: 3 <issue_comment>username_5: I've been through this for a while, searched official sites and so on, and I came to the conclusion that Italian **grades are underestimated in the UK**. I explain you why.
**Note:** for *simplicity* I will speak about exam grades since the graduation score in Italy is given by the average converted in 110th plus a *variable score* (depending both on the university and the course).
Both in USA and UK, scores are in percentage.
**UK:**
* First class (1st): 70-100%
* Upper second (2.1): 60-69%
* Lower second (2.2): 50-59%
* Third (3rd): 40-49%
**US:**
* A (4.0): 93-100%
* A− (3.67): 90-92%
* B+ (3.33): 87-89%
* B (3.0): 83-86%
* B− (2.67): 80-82%
* C+ (2.33): 77-79%
* C (2.0): 70-76%
* D (1.0): 60-69%
* F (0.0): 0-59%
Now to Italy. Italian exam grades are out of 30: the minimum is 18/30 and the maximum is 30/30 plus honors. So it is:
**Italy**
* A-, A, A+ (Excellent): 27-30/30 and 30/30 plus honors
* B-, B, B+ (Good): 24-26/30
* C-, C, C+ (Satisfactory): 21-23/30
* D-, D, D+ (Barely passing): 18-20/30
Now, if you convert to percentages as in the US system (since both have their minimum at 60%), you get:
* A (Excellent): 90%-100%
* B (Good): 80%-89%
* C (Satisfactory): 70%-79%
* D (Barely passing): 60%-69%
That **makes sense**, right? In fact, if you convert between IT and US, you get the percentages above. *But what happens if you convert IT to UK?*
Well, it's kind of funny. As I said before, *the final score is made from the conversion to 110ths PLUS a variable score*. Now, let's pretend for a moment that you just have to convert the grade to 110ths; the math gives you:
* A (Excellent): 99-110/110
* B (Good): 88-98/110
* C (Satisfactory): 77-87/110
* D (Barely passing): 66-76/110
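To make the arithmetic explicit, here is a small Python sketch of the mapping just listed: a straight linear rescaling of a grade out of 30 to a percentage and to the 110-point scale, ignoring the extra graduation points (the thresholds simply restate the bands above):

```python
# Sketch of the conversion just listed: an Italian exam grade out of 30 mapped to
# a percentage (grade / 30) and to the 110-point final-grade scale (grade * 110/30),
# ignoring the extra graduation points.

def italian_30_to_bands(grade_30):
    percent = 100 * grade_30 / 30
    on_110 = round(grade_30 * 110 / 30)
    if grade_30 >= 27:
        letter = "A (Excellent)"
    elif grade_30 >= 24:
        letter = "B (Good)"
    elif grade_30 >= 21:
        letter = "C (Satisfactory)"
    else:
        letter = "D (Barely passing)"
    return percent, on_110, letter

for grade in (18, 21, 24, 27, 30):
    percent, on_110, letter = italian_30_to_bands(grade)
    print(f"{grade}/30 -> {percent:.0f}% -> {on_110}/110 -> {letter}")
```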
I want to remark again that this score is lower than it should be, because you still have to add the points for your graduation. At my university the points added were between 1 and 3, but other universities give more or fewer points.
If you check the requirements to enrol at a university, for example Manchester, you are required to have a score of **100/110** (link below).
If you are from the USA, you are required to have a **GPA of 3.0**.
Too bad that if you convert from IT to US, you find that **for a GPA of 3.0 you need 25/30 or 89/110**.
If you are from the UK you need a 2:1, which is a correct translation of the US requirement but not of the Italian one.
*Nice job UK.*
Requirements for Italian students: <http://www.manchester.ac.uk/study/international/country-specific-information/italy.htm>
Requirements for USA students: <http://www.manchester.ac.uk/study/international/country-specific-information/usa.htm?page=2>
Requirements for UK students: <http://www.manchester.ac.uk/study/international/english-education-system/>
Upvotes: 0 <issue_comment>username_6: What most people fail to understand is that a 70% on the UK scale does not mean you only got 70% of the exam correct. A score above 70% is something outstanding that very few people manage to achieve, which is why it's considered equivalent to much higher percentages in other countries. If you get everything correct on an exam you will get a 69% (happened to me). To get a score higher than 70, you have to not only answer the questions perfectly correctly but also insert new knowledge into your answers that you obtained from reading outside the course material. I have experience in both the Portuguese and British systems and I can tell you that a 70 on the UK scale is not your typical 70%. I would say it is equivalent to a 90% on the Portuguese scale. Therefore, I don't think it's unfair that they ask for higher requirements from students from other countries.
Upvotes: 0 <issue_comment>username_7: The Italian equivalent of 2:1 is actually 100/110 - 104/110 according to Ca' Foscari University's website.
However, at the University of Torino, a 2:1 (3.5-4.49 on a 5-point scale) translates to 70%-89% on the 100-point scale and 77-98 on the 110-point scale. I was denied admission because I didn't score 90% on the 100-point scale or 100 on the 110-point scale.
<https://www.unive.it/pag/fileadmin/user_upload/inglese/pdf/Instructions_grade_conversion_cumulative_percentages.pdf>
Upvotes: 2 <issue_comment>username_8: I think it is impossible to do the conversion. UK universities don't value students with a foreign bachelor's degree as much as those who obtained one at another UK university. Some universities only accept students with a 110/110 grade, which is much harder to obtain than a First (for which you only need an average of 70%). They underestimate grades from Dutch universities such as TU Delft, which is arguably better than any engineering university in the UK.
The grades vary a lot from university to university. For instance, at my university, the Polytechnic of Turin, the average grade for my course last year was 97.1 for a Bachelor's degree. At another Italian university, it was 101.3. Also, in response to another comment, "38% of the graduates get 105/110 or more" - that takes into account master's courses, where it is easier to earn a higher grade.
Edit: I see that some people were upset but having studied in both countries I have a better understanding than most.
Upvotes: 0
|
2015/05/01
| 1,415
| 5,695
|
<issue_start>username_0: Of all the statistical factors used for judging a publication record, the h-index seems to be the most commonly used.
[Wikipedia](http://en.wikipedia.org/wiki/H-index) says
>
> Hirsch suggested (with large error bars) that, for physicists, a value for h of about
> 12 might be typical for advancement to tenure (associate professor) at
> major research universities. A value of about 18 could mean a full
> professorship, 15–20 could mean a fellowship in the American Physical
> Society.
>
>
>
I am an organic chemist with an h-index of 16. I assume physics should be similar to (organic) chemistry. I am now applying for a tenure-track position.
I aim to be overqualified for research funds, academic and scientific honors, etc.
### Questions
* How important is the Hirsch index (h-index)?
* How much is the h-index really relied upon?
* Can the h-index be used to categorize yourself?
* Can I really set a goal that by reaching h-index 20, I am at the level of fellows of my professional society?
* Can the h-index be used to indicate whether I am ahead of my rivals?
* Can we claim something by h-index or is it just a number?
* What should the h-index of an assistant/associate/full professor in chemistry be for them to be a leader at their rank?
If you really want to use the h-index to see how you compare to other organic chemists, why not look at the h-index of your colleagues, collaborators, or of researchers at the department you're applying to? You will already have a sense of their relative position, you'll be familiar with their work, and it'll help you get a sense of what the h-index might mean in your specific field - as well as the amount it varies between individuals you'd think of as comparable.
(Make sure to calculate all h-indexes using the same citation data, though. You'll get confusing results if some use Web of Science data and some Google Scholar...)
Upvotes: 4 <issue_comment>username_2: Indeed it differs from one field to another.
I think the only way to know how high the h-index of a successful academic should be is to examine the h-indexes of academics whom you already know to be successful.
As for the logic behind the h-index, from <http://mkhamis.com/blog/whats-an-h-index/>
>
> So why is this a better way to evaluate the impact of an academic or a venue than simply counting the number of publications or the number of citations? Well, if it was based on the number of publications, you could just publish a lot of papers at venues that accept everything.. If it was based on the number of citations, you could have 1k citations because of a small contribution to someone else’s paper, that resulted in you being a co-author. In the latter situation, it could be that your impact is not strong after all, perhaps the rest of your publications have very few citations (or none). If that was your only publication, your h-index would be 1.
>
>
>
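To make the definition above concrete, here is a minimal Python sketch that computes an h-index from a list of per-paper citation counts (the counts are made-up examples; real values would come from whichever citation database you use):

```python
# Sketch: the h-index is the largest h such that at least h papers each have
# at least h citations. The citation counts below are made-up examples.

def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))        # 4: four papers with at least 4 citations each
print(h_index([1000]))                  # 1: one heavily cited paper still gives h = 1
print(h_index([25, 8, 5, 3, 3, 2, 1]))  # 3
```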
Upvotes: 3 <issue_comment>username_3: I'm in physics. I hit an h (as computed by inSpire) of 16 while I was still a postdoc, and I was running low compared to my colleagues in the same sub-field (experimental particle physics) whose careers were going ahead faster than mine. On the other hand, most of my theory colleagues at a similar place in their careers were far behind me in h.
**Lessons:** (A) It might have meaning in comparing two people in essentially the same sub-discipline, but you simply can't make comparisons across narrowly constructed field boundaries. (B) The numbers in the Wikipedia article are too tightly constrained and not broadly applicable.
If you insist on using bibliometrics to compare candidates, you *need* to rate each one in terms of their progress relative to their peers, defined as narrowly as possible. That is a lot of work, so it is not for lazy people.
Upvotes: 3 <issue_comment>username_4: The first giveaway is "**with large error bars**". The implication is that there's a weak correlation between career advancement and h-index. Even quite closely related fields will have different publishing patterns, for example:
* The typical size of groups working on a project (~length of author list affecting number of papers published per author)
* Whether the journal(s) most popular in the field prefer a few long papers or quick publication of results (affecting the number of papers per project)
* Those journals' authorship standards.
* The typical referencing style of a discipline -- every related piece of work, just those actively discussed or somewhere in between. This may also be affected by reviewers' expectations/demands and is likely to lead to more citations in work on the boundary of disciplines.
* How well-indexed the main publications are in your field, by a particular tool (Mine differs significantly depending on whether I ask Google scholar or ResearcherID. This is in a field where journals are the main route to publication. It may indicate how reliable the calculation is.)
So the h-index *should be* treated as closer to a game than a benchmark, and therefore *should be* of little-to-no relevance in hiring decisions. Besides, the more widely these sorts of indices are relied upon, the more people will get onto author lists, and the more self/buddy citations will occur. I don't mean anything that clearly crosses ethical boundaries, just the error margin in whether someone's contribution is worthy of authorship or citation.
Upvotes: 2
|
2015/05/01
| 1,380
| 5,868
|
<issue_start>username_0: I have recently been offered a position for a "postdoc" (fixed term level A position) in Australia.
In the British/Australian (and European) academic systems, career progression is very different from the United States, as outlined in this [answer](https://academia.stackexchange.com/a/16687/4572).
In my field (a subset of engineering) we have an authorship order for publications based on contribution level, with the last author being the PI/Advisor of the lab (receiving a lot of credit in the process).
In the British/Australian systems, postdocs are *expected to informally supervise a few students*. I have been informed that I will be expected to help supervise or actually supervise students, and perhaps obtain my own funding as well. In the US, most postdocs in my field focus exclusively on research and nearly always get first author.
Additionally, if I move up to a Level B and/or C academic position (sort of like assistant and associate professor, respectively, here in the US), this supervision and funding responsibility will increase. This part is similar to being a tenure-track associate professor in the United States.
The difference between the US and the UK/AUS systems is that at the B academic level, I still would not be a professor, and I would have a PI above me. This brings up the authorship question...
**Given the different structure between the two systems, does the UK/AUS system dilute the competitiveness and career advancement of junior academics in the UK/AUS system, especially if they intend to migrate back to the US academic system?**
The reason that I ask is because in the US, once you are an assistant professor, you will get last author on any papers produced by your lab, but in the UK/AUS system, there may be a professor in charge of junior academics (assistant and associate professor equivalents) who would instead receive last author. This then dilutes the rating of the junior academics among their peers, because they become a middle author instead of the last author.
If this isn't the actual practice, please correct me.<issue_comment>username_1: Academics in Australia are usually on temporary contracts or on continuing contracts. Levels, however, don't directly relate to job permanency in Australia. For example, it's possible for an experienced postdoc (temporary, employed on PI's grant) to be paid at level B (starting rate is typically level A; step 6) and it's also possible for an academic to be appointed at level B for a continuing position (similar to tenure track). Although some level Bs will be on continuing positions, it's also possible for an academic at level C to not have a continuing appointment (e.g. a Future Fellow whose dept. has not committed to support him/her after the fellowship runs out). Yet they are still at level C because that is the appropriate level for their career stage (i.e. IF they had a permanent position, they would be appointed at level C). Most of these people, however, can leverage such a fellowship into a continuing position and start their own lab. This is similar to starting a tenure track position in the USA.
I think it's best to think of levels as representing *career stage* not the type of job (i.e. the level does not determine whether an academic is on a temporary or permanent contract, especially at levels A/B). You don't rise through the levels from a temporary contract to a permanent position - you must separately apply for a continuing position. This is analogous to how postdocs must apply for tenure track positions in the USA.
With regards to authorship, I think this is very group dependent. Typically postdocs run their own research and get first author while also supervising honours/PhD students and getting a second/middle author paper for that work. The way you describe this in your question (postdocs getting middle author papers but few/no first author papers) does not fit with my experience in Australia but it might occur in some groups. You could, however, find groups like this anywhere in the world. This is something you need to discuss with your potential advisor.
Note: this post has been heavily edited from the initial version to address questions in the comments.
Upvotes: 3 [selected_answer]<issue_comment>username_2: >
> The difference between the US and the UK/AUS systems is that at the B
> academic level, I still would not be a professor, and I would have a
> PI above me.
>
>
>
This largely depends on the type of appointment you get.
Since your appointment is a fixed-term position, the funding probably comes from a research project, and there is a PI (probably a professor or an associate professor) who acquired and manages that funding. This would make you a Research Associate: your position is pretty much tied to the person who manages the funding, and I would not be surprised if he/she is the last author of the publications.
At the same time, Level B is the appointment level for a Lecturer in an academic position, either teaching only (quite rare) or combined teaching & research (most probable). In such an appointment, where you are also expected to teach throughout the semester, you are paid from the operating funds of the School/Faculty and you will not have a direct supervisor other than the Head of School. You will probably have colleagues in the same broad area of expertise, but no one acting as a PI as in the previous case.
Also levels are used mainly for salary and promotions. Your initial appointment is at Level A, Step 6 (A6) and each year you move one step to A7 and A8. After that you have to apply for a promotion and move to Level B.
These are, in a broad sense, the dynamics of level B appointments (you are either "research" or "academic") and this will probably define authorship.
Upvotes: 2
|
2015/05/01
| 645
| 2,791
|
<issue_start>username_0: I've recently published a paper. It's available online, but not in hard copy yet. What should I do if I find out that there is a typo (repeated many times) in this recently published paper of mine at Elsevier?<issue_comment>username_1: Contact the editor you worked with and let them know. They will advise you of your options and hopefully work with you to update the online version and (if possible) either delay or fix the print version.
Upvotes: 5 [selected_answer]<issue_comment>username_2: There are two pieces of information missing in your question, but it might be better to consider each case anyway, so that my answer might help others. The first thing is whether the typo may lead to misinterpretation of your work, is an error in formulas, or anything that will harm the scientific meaning of your work. The second thing is whether you or the publisher introduced it.
**If the typo is not misleading in any way** (e.g. embarrassing but harmless spelling error), it is probably not worth doing much, whoever made the mistake. If the preprint version of your article is on your web page or on a repository, you should ensure it is free from the typo, and you might want to point out that the published version has this typo, but I would not see the point of going further than that.
**If the typo is actually harmful**, may cause misinterpretation of your work or may make it partly unintelligible, then who made the error matters more. In any case, contact the publisher and the chief editor to see how to handle it. Of course, if you made the error, you have to be apologetic, while if the publisher did, you can be more demanding. The most probable outcome is that an erratum will be issued. Most journals have only one version of each article, and thus cannot afford to change anything between electronic publication and print publication; however strange it may seem, I have seen many errata printed in the same issue as the article they correct. Maybe electronic-born journals are in a position to handle this better.
---
For the story, my very first article got published (both electronically and in print) with a repeated "typo": half a dozen instances of $n/2$ and $(n+1)/2$ were replaced by $n^2$ and $(n+1)^2$. I realized that when I received the offprints, and could not believe it. I knew I had not made the mistake, but at first I thought I had not checked the galley proofs closely enough. It turned out that the typos were introduced *after* the galley proofs. This made a strong case, and a corrected version of the article was reprinted entirely in a subsequent issue.
I think it fair to mention that the publisher was Springer, but I have heard similar horror stories from most big, expensive commercial publishers.
Upvotes: 3
|
2015/05/01
| 1,330
| 5,740
|
<issue_start>username_0: These days, many new scholarly journals are emerging. Since they are not famous or high-impact, they look for less famous scientists to fill their editorial boards. This scheme suits many mid-level professors who are satisfied with their routine job and just need some scientific recognition for official purposes.
However, if you have big plans for your future, would you join the editorial boards of these journals? When you become a renowned scientist and have joined famous journals, will you regret this past decision? Or is this a community service, and should we help new journals too? (Renowned scientists are busy, so young scientists should take the job.)
In other words, can serving a low-quality journal as a member of its editorial board be considered as a weak point in your academic career?
I am moreover interested in the case, if a journal is listed on Beall’s list of predatory publishers.<issue_comment>username_1: I can think of many reasons to become involved with legitimate even if currently lower-tier journals. For example, you may find the editorial experience a useful way to network. You may gain better insight into the publishing process. If a journal launches in exactly your specialty, where no journal existed before, you may have every reason to contribute to making it a success--and it may not stay lower-tier. These positions may not look great on your CV, but should not be an embarrassment either. The main issue comes down to how much time it is wise to devote to such endeavors.
However, I cannot see any advantage to becoming affiliated with a predatory [Beall's list](http://scholarlyoa.com/publishers/) journal. Even if a few top-notch people have agreed by mistake to serve on the ed board (or have been listed without their permission, as sometimes is the case), having your name there makes you look inexperienced or gullible. Few if any of these journals are going to persist in the long term, so it is not like you're getting in early on a good thing. Listing editorial positions with these journals on your CV simply suggests you have nothing more positive to describe. And finally, these journals are exploitative of science and of scientists. Why would you wish to lend your name to that?
**Disclosure**: I am on the editorial boards of three journals, all open access: two top-tier and one mid-tier (from a top university press) that I would like to see become top-tier.
Upvotes: 4 <issue_comment>username_2: Joining an editorial board is a form of endorsement as well as a service to the community. You should never become an editor unless:
1. You understand how the publisher and journal operate, and you have good reason to believe the operations are competent and professional in every way.
2. You know the other editors, at least by reputation, and are certain that they are actively engaged in running an academically respectable journal. In particular, you have talked with other editors about how the journal is doing and what is involved in joining the editorial board.
3. You honestly believe that publishing papers in this journal is good for the research community as well as the authors (and no well-informed person could describe it as junk, corrupt, predatory, or exploitative).
If you know enough to be sure all three conditions hold, then it doesn't matter what Beall says. If anyone questions the respectability of the journal, you can convince them they are wrong. Over time, the reputation should improve.
If you aren't sure, then you need to investigate further. You have no business joining the editorial board of a journal you aren't prepared to endorse. (I'd even go so far as to say it's unethical to lend your reputation to a journal that doesn't deserve it.) If the journal is predatory, then being an editor will look bad.
When I've joined editorial boards for journals I know well as a reader and author, I've still had discussions about expectations for editors and how the journal works from the inside. Becoming an editor is a substantial decision that should be based on careful consideration. Nobody will be offended if you have questions or just want to talk. (At least, if they are offended, then you shouldn't trust them.)
There's also the question of whether becoming an editor of a low-prestige journal looks bad, assuming it's not predatory but just publishes below-average papers. One key question is whether the journal publishes papers you are interested in, papers you or other people you respect consider worth reading and citing (even if they aren't exciting or important papers). If so, then being an editor sounds worthwhile. If not, then what's the point of being an editor? If the papers aren't worth reading, then listing it on your CV risks making people think "This person either has low standards or is willing to do pointless work just in order to be an editor." That's not a disaster, but it's not a particularly flattering assessment.
Upvotes: 5 <issue_comment>username_3: Yes, it's bad. I was invited to join the editorial board of a journal. I looked up the other people on the board, many of them well-respected scholars. So, I agreed. The first paper they asked me to review was a surprise -- I learned that the journal did not practice blind reviewing. Then I learned that the publisher is on Beall's list. I immediately resigned, and was relieved to be removed from the board and the website. I also removed all reference to it from my CV.
It's pretty well known these days that these predatory journals are proliferating. They have a terrible reputation among scholars, their editorial practices are shady, and there is no benefit to being associated with them. In fact, it will hurt you.
Upvotes: 2
|
2015/05/02
| 2,762
| 11,050
|
<issue_start>username_0: I received a mail today from Academia.edu (a site I wasn't previously aware of), asking me to confirm that I co-authored a paper with a colleague.
Having looked into it a little it sounds like it might be a useful site - the idea of a "social network for scientists" is one I've seen the need for in the past. However, partly due to bad experiences with the seemingly similar ResearchGate, I'm also skeptical.\* Without signing up for an academia.edu account the site doesn't offer much information, so I would like the following information:
1. What specific features does academia.edu offer to its users?
2. Is it genuinely useful for any of the following purposes (each of which seems genuinely needed)
1. as a platform for networking with academics
2. for discovering relevant research
3. as an effective system for post-publication peer review
4. for organising references among a small team of people working on a project
3. Will it send out mails to my colleagues without my express and explicit permission? (I.e. are the mails I received today the result of a deliberate action by my colleague, who is aware that I will be emailed and wishes me to join the site; or are they essentially spam from a social networking site aggressively trying to expand its user base?)
4. It's clear from its [Wikipedia page](http://en.wikipedia.org/wiki/Academia.edu) that it's a private, venture-capital funded company. What is its business model?
In short, is this a site that has some genuine utility for academics, or should I just ignore it?
\*I've never signed up for ResearchGate but I regularly receive spam from it purporting to be from my colleagues, who aren't aware that it's being sent on their behalf. I would be mortified if my senior colleagues received such mails claiming to be from me, so I won't touch it with a barge pole.<issue_comment>username_1: Having an Academia.edu account (which I'm pretty sure I've had for around 2+ years), I can say that I am not real satisfied with the service. My particular field in social science has a relatively small online footprint, and I thought Academia.edu may be starting to have a big growth when I signed up, but it still has never really caught on with any more than a small minority of my field. This point will come up again in my responses, in that if Academia.edu has more infiltration into your field it could work slightly better.
So for 1, you can peruse the site and see for yourself. Basically a profile page where you can post your CV and other links if you wish, and then upload pre-prints. You can then assign tags of interest to follow for yourself, and follow specific colleagues. Using these links, it has a front page feature, similar to Twitter/Facebook/LinkedIn, but the page is filled with pre-prints of people in your network and of the tags you follow.
The upload of papers is pretty wide open (there is no quality checking), and I've seen people starting to upload syllabi as well. I've stopped uploading pre-prints because of the crazy [terms of service](https://academia.stackexchange.com/q/16050/3). Unfortunately, some anecdotal evidence from my papers suggests that their promotion of papers across the network (and with search indexing) is less rigorous when you don't upload an actual pre-print. Also, I've always been a bit annoyed that I can't just upload a BibTeX snippet to fill in the metadata.
For 2.1 and 2.2 it would work better if there were more uptake in the field. IMO conferences work pretty well for 2.1, and Google scholar works pretty well for 2.2.
2.3, post publication peer-review is no, it does not offer comments on particular papers. 2.4, managing a bibliography among a research group is no as well (see [these responses](https://academia.stackexchange.com/q/854/3) to that question). It does offer a bare-bones email type system, but that is obviously no better than email to begin with.
3, I don't know specifically. When you have co-authors you do need to verify them.
4, I don't know - it is similar in functionality to LinkedIn and Facebook, so I presume the same type of business model just a different user base.
In retrospect I would not have signed up for an account. I'm happy with posting pre-prints to SSRN, and there are similar sites for a variety of different [disciplines](https://academia.stackexchange.com/q/84/3). If you want a barebones free personal online website it kind of works for that, but I enjoy having a free wordpress blog that does the job and I have much more control over the content and format of the site. Google scholar works quite well for finding content. Strong networking ties in my experience happen more in conferences and just naturally being in the field over a time period. "Friending" someone on Academia.edu is a bit superficial.
Upvotes: 4 <issue_comment>username_2: In my experience, Academia.edu is useful mostly for [2] - discovering relevant research. [1] happens mostly through participating in feedback sessions for unpublished manuscripts. Definitely not for [3] and [4], as they are not supported.
Usefulness for discovery comes in 3 ways:
a) The **home News feed**: you will find out about:
1. publications of people you follow;
2. papers read by several of the "people connected to you" (those you follow and also their followees), or papers recommended by one person connected to you;
3. activity in manuscript feedback sessions. Of course, the quality of the feed will depend directly on how many people you follow and how relevant they are for you. If you follow people whom you know but about whose research you don't care much then your feed will be boring, especially because the 'people connected to you' that they will introduce will be even less relevant to you. Just follow people you really want to know about.
b) Navigating through **topics** of interest: each topic allows one to see a list of researchers who follow that topic and, more interestingly for me at least, to see a list of papers recently posted under that topic. Keep in mind that topics are user-defined, thus there may be several similar formulations for each subject that you would consider following (such as "humor", "humour", "humor studies", "humour research", etc); it is best to follow them all, as they have partially overlapping communities.
c) Mixed **text-author search**: jumping from texts to their author's profile, then to other texts by the same author or her co-authors, and so on.
Unlike Google Scholar, where one searches and navigates in a universe of ranked text lists and interlinked texts, Academia.edu introduces the researcher profile as a "bridge" that connects different publications. It is up to you how much you will enjoy this new mode of transport, so to say.
I find it useful to check publications from authors I like, publications I would most likely not have found through a keyword-directed search on Google Scholar. While you could also use Google Scholar profiles to this effect, I find that Academia.edu is more pleasant to look at and also has more opportunities to connect texts with author profiles as you scroll in the news feed or in various lists. For me this **text-author jump** is the most useful feature of Academia.edu.
d) Last but not least, I find it energizing to look at texts which have been recently written or read by a living person. It gives a **human touch** to the entire enterprise. Of course, it also anchors literature searchers firmly in the here-and-now at the expense of past decades and centuries, so this is something I have to compensate for.
This sort of discovery works best, I think, for publications which are not in a very specific niche of keywords that you already master and can search for in Google Scholar. For me, it works for publications that depart somehow from my pattern - either by nuancing the topic, or going meta to reflect on methods and the philosophy of the enterprise, or taking a diverging theoretical stance which I do not usually tap into, and so on. This gives me some space for serendipity in extending my scope of thinking.
Upvotes: 3 <issue_comment>username_3: I distrust it for an entirely different reason. I once wanted to download a paper, and could only do it if I signed up. I signed up by logging on with my facebook account... Well, academia.edu took my profile information, and, without me knowing it, created an academia profile. With my picture, publications it could find via search engines, and a list of interests that were half right and half ridiculous. I'm a neuroscientist; it listed me as being interested in marketing, among other things. I only discovered this profile about a month later.
Apart from it being entirely unprofessional, I simply do not trust information that is on there, as I have first-hand experience that information on me was wrong.
Upvotes: 7 [selected_answer]<issue_comment>username_4: I had an email from academia.edu asking me to confirm that I am a co-author of a paper ("XY tagged you as a co-author on a paper"). So I asked the other "author" what it was about since I did not recall writing such a paper with him. He had no idea either, Conclusion: something fishy....
Upvotes: 3 <issue_comment>username_5: I also had an email from academia.edu asking me to confirm that I am a co-author of a paper ("XY tagged you as a co-author on a paper"). I was not 'brave' enough to open the email but could see that the topic was poles apart from any research I might ever have done, so will just delete the email.
Upvotes: 3 <issue_comment>username_6: The next issue of the [*Chronicle of Higher Education*](https://chronicle.com/) has a piece on academia.edu ... the blurb for it:
>
> The academy has always been a hothouse of invidious comparison. This website makes it worse.
>
>
>
Upvotes: 3 <issue_comment>username_7: **No. Academia.edu is academic spam.**
The unsolicited emails I received from academia.edu had absolutely nothing to do with my research. They sent me a weekly digest of utterly random papers that I did not recognize, but they seemed to think might interest me. They also sometimes asked me to confirm a coauthor. This was either someone I didn't know, or someone I did know who made the mistake of using their "service" and thereby unwittingly spammed their coauthors. They do host content, which I guess is a legitimate service, but they generally [insist that people sign in](https://academia.stackexchange.com/q/71699/44249) to view it, which makes it more annoying than useful.
I clicked the unsubscribe link in their emails and it asked me to create an account in order to unsubscribe! (Is that even [legal](https://en.wikipedia.org/wiki/CAN-SPAM_Act_of_2003#Unsubscribe_compliance)?) To me, this is a clear sign that they are not operating in good faith. I think they are spammers, plain and simple.
By the way, if you want to unsubscribe from their communications, email the CEO at `<EMAIL>` as well as `<EMAIL>`. That worked for me.
Upvotes: 3
|
2015/05/02
| 1,025
| 4,289
|
<issue_start>username_0: The question is not arising in my capacity as an academic. I was recently reading a book and came across a claim that seemed dubious, so I wanted to find out whether it was accurate. I thought of contacting the author, but then I discovered that the author had been dead for a few years.
My question is, what is the appropriate thing to do in such a case, where the author is the only one who has the information necessary to verify a claim, and yet the author is deceased? Assuming you cared about it enough, would you contact the family of the author, to see if they had notes the author had used in writing the book, or is that impolite? What else can be done?<issue_comment>username_1: Dealing with a claim by a dead author is exactly like dealing with a claim by a live author who isn't answering emails. Or, for that matter, dealing with a claim by an author who is answering emails, but not to your satisfaction.
In science, no single statement is the end of the story. If a statement can be backed up, then it should be backed up by some combination of other scholars and the universe at large. If you can't find sufficient justification in the text, or the sources, or other scholars working in the same area, then it is appropriate for you to treat the statement as an unjustified assertion.
That doesn't mean it's wrong... but it does mean that you shouldn't depend on it to be correct.
Upvotes: 2 <issue_comment>username_2: >
> Assuming you cared about it enough, would you contact the family of the author, to see if they had notes the author had used in writing the book, or is that impolite?
>
>
>
It's implausible that the author's files are well-organized and clear enough that a non-expert family member could easily answer your question. And unless the answer is dreadfully important, it seems inappropriate to ask whether you could fly into town and spend hours/days searching through any files the family possesses. (One can accumulate a large number of boxes of papers over the course of a career.) That could make sense if you were working on a major project about the author and the family was eager to help, but for an isolated question it feels like far too much of an imposition on the family.
If the family felt the files were likely to be of continuing academic or historical interest, then they may have donated them to a university library or archive (most likely at the university the author worked at, if any). I'd start by doing web searches to try to find out. If you can't find out online, you could enquire at the most likely university library.
Before trying to track down the author's files, it's worth convincing yourself that there's no other way to get an answer. For example, maybe the author is alluding to something well known among a certain community, or maybe the author supplied more details in a different publication. You could try ask online (e.g., on a suitable stack exchange site) and see what happens.
If the author worked with a collaborator or had a student who specialized in this topic, then they would be natural people to ask. You could also try asking another expert (e-mailing someone out of the blue will come across best if you give an explanation of how you have tried and failed to find the answer).
If nobody else knows and the author's files weren't formally archived anywhere, then I'm not sure how much more you can do. The author's information may simply be lost and will have to be reconstructed from scratch.
Upvotes: 3 <issue_comment>username_3: There is a very good case of this in Korea. A researcher had written a paper, claiming to have found a 3rd system, the Primo Vascular System, which can carry cancer and other things. They were able to dye it and trace it.
However, he died, and no one else knew what he was doing. His paper also did not explain well enough for anyone to recreate it.
So how do you deal with it? As username_2 points out, unless you consider this a monumental breakthrough, there is little you can do. In South Korea, Universities and Funding agencies agreed with someone (maybe like you), that it is worth millions of dollars to invest in recreating this system.
Reference: <http://www.hindawi.com/journals/ecam/2013/587827/>
Upvotes: 0
|
2015/05/02
| 1,334
| 4,880
|
<issue_start>username_0: I am currently writing a paper on accessibility and I am talking about some companies in my report. How do I write a company or product name when the word is not spelled correctly?
For example:
>
> I went to ToysR’us at the weekend.
>
> I visited the Change4Life website.
>
>
>
Is it acceptable to just write it as normal, or should you put the word in italics or quotation marks?<issue_comment>username_1: Usually, the journal’s copy editor will have an opinion on this. My personal guideline is to adapt the spelling as much to a regular one as possible without diminuishing the identifiability of the name.
Thus, *Toys R Us* stays like this (because, at least I would have to think am moment about who *Toys Are Us* is), and *Change4Life* becomes *Change 4 Life* ore stays as it is. However, *BIG STUFF INC* becomes *Big Stuff Inc.,* and *etechnicks* becomes *Etechnicks* or possibly *eTechnicks,* and *Ovəя!†he!†0p!!!* becomes *Over The Top.* In particular, there should be a capital letter at the beginning of the word or at least near it and only there (unless it is a real abbrevation), so it can be easily identified as a proper name, hence this is the function of capitalisation in the English (and almost every other) orthography.
For further reading, I recommended: [Editors’ enemies](http://www.theslot.com/caps.html) and [‘But FUNKY!!!web!!!DUDES.com is their trademark!’](http://www.theslot.com/webnames.html).
As for italics and quotation marks, I would not treat such names differently from others. If you follow the above rules, they should be identifiable as proper names through capitalisation, which should be sufficient.¹ If the journal’s style requires you to italicise company names or equip them with quotation marks, that’s another story.
---
¹ I only italicised the names in the above, because I was talking about the names, not the companies.
Upvotes: 2 <issue_comment>username_2: I challenge your contention that these things are spelled incorrectly. When you write 'Toys "R" Us' you are *correctly* spelling a proper noun. Names are signifiers, and the entity who controls the name controls how it is correctly spelled. I would not have told my high school friend whose last name was "Tomson" that his name was spelled wrong just because for most people it was spelled "Thompson." Likewise, 'Toys "R" Us' has chosen a particular spelling for its title, and that's the correct name.
So now comes the question of how to communicate such titles, and here, I see four basic cases:
1. Just use it as written, and count on people to understand: not great with Toys"R"Us since quotes are semantically loaded, but fine for a well-understood "misspelling" like [Google](http://en.wikipedia.org/wiki/Googol).
2. Often, there are commonly used variants that are simpler. If you say Toys R Us, people will know what you are talking about even without the quotes, so it's OK. Same with [the artist formerly known as Prince](http://en.wikipedia.org/wiki/Prince_%28musician%29#1991.E2.80.9394:_The_New_Power_Generation.2C_Diamonds_and_Pearls_and_name_change). If you say Toys Are Us, however, you've gone a step too far and "corrected" the name into something incorrect.
3. Put it in quotes, like I did in the first paragraph: 'Toys"R"Us' (here I made the unusual choice of single quotes because of the presence of double quotes in the name).
4. If you think it is still strange enough that you think people will think you misspelled it, you can add [sic] afterwards, as in "Tomson [sic]"
I don't like italics, as a solution, personally, since I generally see italics used for emphasis or for definition, and neither is the case here.
Upvotes: 4 [selected_answer]<issue_comment>username_3: For published work, this is for the journal to decide. For your own writing, [Wikipedia's guidelines](https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Trademarks) seem sensible. They distinguish the name of the company from the company's preferred orthography.
A company, just like a person, is free to choose the spelling of its name. If I want to call myself Dayvydd (I don't), I can do: you're welcome to tell me that that spelling is unusual but, if I choose it as my name, it is my name and it is correct by definition.
A company is also free to brand itself using unusual typography: for example, unusual letter cases, miscellaneous symbols and so on. This is a sort of watered-down logo: you wouldn't include a full-blown logo in running text and there's no particular reason to use one of these almost-logos. So, use ordinary capitalization and replace any weird unpronounced symbols with the normal ones ("Macy's", not "macy\*s"; "Seven", not "se7en") but don't replace pronounced symbols with words ("Phones4U", not "Phones For You"; "Toys R Us" but not "Toys [backwards-R] Us" or "Toys Are Us").
Upvotes: 3
|
2015/05/02
| 822
| 3,426
|
<issue_start>username_0: Many scientific journals accept only black & white articles.
Does this mean that only monochrome (black and white only) articles are acceptable, or are grayscale (gray is OK) articles also OK?
The same question holds for books.<issue_comment>username_1: I have a confession to make. I absolutely ignore it when a publication tells me they want grey-scale or black and white.
Yes, they want it because somebody is going to put something on some dead trees, they're going to be cheap about it and not use color, and somebody might actually pick up a dead tree and see my mangled image. But either they're also going to put it in a PDF, or else I'm going to put a pre-print in a PDF and that PDF is going to go online in all its radiant rainbow glory, and *that* is what people will actually read.
So [CENSORED] monochrome. That is so 20th century.
Upvotes: 5 <issue_comment>username_2: In general some degree of half-toning will be used, meaning that a limited set of grey scales can be output in the printed journal, so figures supplied in greyscale should come out in greyscale. Figures supplied in colour may (i) upset the editorial staff or software (extra hassle for you); (ii) be chargeable (to avoid this you may have to resubmit figures, extra hassle); (iii) reduce to greyscale rather inconsistently with what you expect. So if you submit colour figures (and I generally do), you may as well go for good clear distinctions between data sets, and print B&W yourself to check.
It is very possible (and in some cases required) to produce most figures in a way which either doesn't hinder the B&W reader too much, while still aiding the reader who works in colour. Two examples:
* lines on a graph can be dashed etc. as well as coloured, with colours chosen to render different shades of grey.
* colour maps can easily be chosen to be continuous and [monotonic](https://en.wikipedia.org/wiki/Monotonic_function) -- though this isn't usually the default.
Both of these approaches are a small step towards helping colour-blind readers as well.
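For illustration, here is a minimal matplotlib sketch of both ideas, plus a quick greyscale preview of the saved figure. The file names and data are placeholders, and converting the PNG to greyscale with Pillow is only a rough check, not a simulation of the journal's actual printing process:

```python
# Sketch: distinguish data sets by line style as well as colour, use a colour map
# that is roughly monotonic in lightness, and preview a greyscale version of the
# saved figure. File names and data are placeholders.
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

x = np.linspace(0, 2 * np.pi, 200)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Line styles survive greyscale printing even if the colours reduce to similar greys.
ax1.plot(x, np.sin(x), color="tab:blue", linestyle="-", label="data set A")
ax1.plot(x, np.cos(x), color="tab:orange", linestyle="--", label="data set B")
ax1.plot(x, np.sin(2 * x), color="tab:green", linestyle=":", label="data set C")
ax1.legend()

# 'viridis' is perceptually uniform and close to monotonic in lightness,
# so larger values still look lighter after conversion to grey.
im = ax2.imshow(np.outer(np.sin(x[:50]), np.cos(x[:50])), cmap="viridis")
fig.colorbar(im, ax=ax2)

fig.savefig("figure_colour.png", dpi=300)

# Quick check of what a black-and-white printout might look like.
Image.open("figure_colour.png").convert("L").save("figure_greyscale_preview.png")
```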
You can't assume your readers will work on screen -- an interesting paper may want to be annotated (I've not yet found a pdf-markup solution that comes close to pen&paper for this). Paper copies are also easier on public transport unless you have a very good and large tablet. It's also not uncommon for B&W printing to be easier (less far to go to the printer) and much cheaper (or uncounted and essentially free) than colour.
Much of this was going to be a comment presenting the opposite point of view to @username_1, but another couple of sentences made it an answer.
Upvotes: 5 <issue_comment>username_3: First off: I am not sure that many journals accept only monochrome or B/W (usually implying grey-scale) figures, since the trend for quite some time has been toward digital-only publication (where colour versus B/W is, technically, an irrelevant question).
The question of true black-and-white versus grey-scale is a matter of technique. In the analog days of printing, grey-scale images had to be rasterized, and this may have been a problem for some publishers reproducing grey-scale illustrations. Today rasterization is done digitally within the printing process and should not pose any problem. It is therefore relatively safe to say that providing grey-scale illustrations will be fine. As I stated above, B/W has included grey-scale for quite a long time now.
Upvotes: 0
|
2015/05/02
| 1,869
| 7,373
|
<issue_start>username_0: Okay, first off let me provide an overview of my educational past:
Just as any kid I went through school, and then high school with an integrated prep course for getting into a top notch college in my country. I should admit, I **loved** my subjects (physics, chemistry, math and biology). The faculty was great. I showed a lot of enthusiasm and my then teacher appreciated it.
However, naturally at that age I had many obstacles: Peer pressure, very few friends, was bullied, and I was pretty desperate to get into the social atmosphere (The cool group, the small talk, back stabs, etc.)
So, that bad habit stuck with me (that is, always trying to pretend so as to be accepted socially). It has gotten in the way of my personal life, and behaviorally I feel like I'm really messed up in some way.
I got into the 7th best college in my country. I did not take a bridge year. I was pretty confident about getting into the Electronics and Communications branch, and I did.
I've learned a lot in college, met people with a varied persona, read a lot, and also learned that there can be a wide set of career paths one can choose nowadays.
I have varied interests that I'm good at: art painting, murals, sketching, fashion design, stitching, singing, acting, and impersonating famous scenes from movies or people in real life (like some of my very comical faculty members).
I'm now completing my 3rd year in college and I don't like it here, or at least in this branch. That said, I do like some subjects like DSP, Embedded system design, mechatronics, but I can't imagine this as my long term career! Everybody around me is talking about and praising this industry: "Oh, the amazing technology", "This wonderful company", "the internship opportunities", etc. I'm so confused, I'm just slipping into something I don't like! And I have a burning desire to try my hand at something in the arts (acting, theatre, and fashion design).
I've hinted about my displeasure with the current state of things to my parents, but I have never told them anything about my "plan" for life.
My parents are overprotective of me. They feel the need to know everything about me. I am pretty open with them about most things, but it gets annoying when I see my father's hidden dissatisfaction at my refusal of his suggestion to go to the US for an MS or an MBA. In addition, when I spend my free time sketching, stitching, or graphic designing, an hour later, artfully, my parents will ask me something like: "Don't you have assignments to do?", "Don't you have exams in a week?", or "How do your friends get 9-point grades?". Sometimes my college faculty members say things like, "You're so good at this subject, why are your grades just mediocre?" When it comes to my career, it's up to me. I don't want anybody to push me into their version of a "secure", "high-paying", but non-adventurous and routine career.
All hell breaks loose in the house when there is a discussion about "what do you want to do in life?"
Honestly, my parents are not very happy at the thought of me going into the film industry or fashion, and I don't stand up for myself, afraid that I might fail if I go down that path, and afraid that in the end they would be right and I would become the family's example of the "ruined career" girl in the relatives' gossip.
But this thing has been on my mind for too long, and my grades are going up and down. I decided that I would take the GRE, go to the States, and join some college, but in the meantime experiment with my passion, and then, when things are right, drop out. There would be no worry about satisfying my parents' expectations there. I would be independent!
But then, what is the practicality of all this?
Would it be too late, keeping in mind the current state of cut-throat competition?
Would it affect my career, whatever it may be, if I take a gap year or a 6 month gap to travel alone and discover myself and my passions after college?
I've asked this question to a few people, even the college counselor, but everybody says that's a crazy idea!
What should I do?<issue_comment>username_1: Most people who are trying for theater or fashion careers need a "day job" that pays the rent and buys the food unless and until their preferred career takes off. For a lot of them, it is a minimum-wage job. You can make it much better paid than that if you play your cards right.
I suggest concentrating on your studies until you have your degree. Look for a day job in your degree field, but in whatever city you can reach that has the best opportunities for the alternative careers you are interested in. Look for a part time film or fashion job that you can work evenings and/or weekends.
Take advantage of having a more highly paid day job to save as much money as you possibly can from the combined income. As your career advances, you may want to do things like moving to a different city without having a new day job already lined up. Having a year or so of minimal expenses in the bank would make that easier.
Upvotes: 4 <issue_comment>username_2: >
> Nobody ever figures out what life is all about, and it doesn't matter. Explore the world. Nearly everything is really interesting if you go into it deeply enough. Work as hard and as much as you want to on the things you like to do the best. Don't think about what you want to be, but what you want to do. Keep up some kind of a minimum with other things so that society doesn't stop you from doing anything at all.
>
>
>
* <NAME>.
I have personally applied the last part, about keeping up some kind of minimum, and found it very true.
It is natural for your parents to worry over you and desire a good life for you - one better than they lived. And it is also natural for you to worry *about their worrying about you* since you, like any other individual, aspire to live life on your own terms.
There can be a win-win solution like username_1 suggested, and I would advise you to try it out. If you join a good university or even a company, you might get a chance to experiment with stand-up comedy, like Dian<NAME> (check her out on YouTube!) does while having a "day job". The same idea can be applied, with variations, to the other fields you wish to explore once you find the right cultural scene where you live.
Now, regarding your fears about your parents trying to "control" you through a relative or some other such means, you have to decide to draw a line, speak up for yourself, and let them know what you like or don't like and what your dreams are. They might worry a great deal for a day or two, but it is better than "betraying" them later. Sit down and tell them that you wish to explore your other interests. If they oppose it tooth and nail, do it anyway! Covertly, if required.
I had a bit of a similar situation in my life and I explained it as: "If an eagle doesn't let the eaglets fly out of its nest, they will never learn to fly. It is better that I disappoint you right now rather than blame you for everything that goes wrong in my life just because I couldn't put my heart into what you advised for my career and life."
But I am an Indian inspired by a movie called 3 Idiots (especially the photographer guy), so the cultural disclaimers apply :)
**PS**: You should give this a read. ["How to do what you love" by <NAME>](http://www.paulgraham.com/love.html)
Upvotes: 3
|
2015/05/02
| 1,295
| 5,882
|
<issue_start>username_0: Can I be admitted to graduate school in a different field from my degree? Specific cases include:
1. If I've taken plenty of advanced courses in field X in the process of completing a degree in another field, can I apply to graduate school in X?
2. What if I haven't taken many courses in X, but I have acquired a good grasp of X through self-study or working in a related field?
3. What if I've never studied X, but I have done very well in an unrelated field? Could I be admitted to graduate school in X on the basis of general intellectual promise, and then make up the missing background after enrollment?
*Note that this question is an attempt to provide a comprehensive answer, to avoid the need for a profusion of field-specific questions on this topic (see the associated [meta question](https://academia.meta.stackexchange.com/questions/1725/can-i-apply-to-field-x-with-an-undergraduate-degree-in-y-type-questions)). Please feel free to edit the question or answer to improve them.*<issue_comment>username_1: Graduate programs care far more about your background and preparation than about which field is listed on your degree. Even if the application requirements list a degree in X as a prerequisite, the department will very likely make an exception if you have a degree in another field but can demonstrate that your background is equivalent. How likely this is depends on which of the three cases listed above you are in:
1. Extensive formal study of X puts you in good shape. You should discuss this issue in your statement of purpose and make sure your letters of recommendation address it. Letters from people in field X who say your background is appropriate would be more convincing than letters from those in other fields.
2. The overall strategy is the same as in the previous case, but you'll have to work harder to make a compelling argument for admission. It's certainly possible to get admitted, but the rest of your application will have to be convincing enough to make up for not having courses in X on your transcript. When you request letters of recommendation from faculty in field X, you should specifically ask whether they are prepared to endorse your background as sufficient for admission (to avoid getting letters along the lines of "this applicant is smart and hard working, but I don't know much about their background and preparation"), and you should strategize with them about how you can present your background in the best possible light.
3. It's unlikely that you can be admitted at this point, for two reasons. One is that many people think they would like to study a field that's new to them, only to discover that it's more difficult or less interesting than they had expected. The other reason is that time in graduate school is a limited resource, and it's inefficient to use it to study prerequisites. You would generally be allowed to fill in a few gaps in your background, but not to begin studying the field from scratch. Instead, if you want to change fields you can begin by taking individual courses at a local university. It's also sometimes possible to enroll in post-baccalaureate programs aimed at helping people change fields (but the availability of such programs varies, depending on the field and location).
There is a notable exception to #3, and that is the handful of fields where there *is* no meaningful pre-graduate coursework. For example, there are vanishingly few undergraduate programs in epidemiology, and as such there is very little expectation that you have taken specific coursework, and nearly the entire admitted graduate class will be "switchers" of some form or another. In these cases, what is likely most helpful is to be able to articulate how your present program and course of study has led you to be interested in, and prepared for, further coursework in that field.
Upvotes: 5 <issue_comment>username_2: I would like to add to the other answers the fact that the ease with which you can jump to a different field in graduate school depends on the nature of your new field. One factor to consider is how interdisciplinary your new field is. Some traditional fields, such as mathematics, physics, and to a lesser extent chemistry, have a fairly strict hierarchy in that you must take certain courses in certain order so as to understand the field. Thus it would be more difficult to convince the admission committee that you are a good fit if you have very little formal training in the field. On the other hand, fields such as biophysics, neuroscience, etc., are very interdisciplinary. Neuroscience programs, for example, will usually be happy to admit majors from math, physics, chemistry, biology, psychology and computer science, to just list a few. In this case, even if your official major is not neuroscience, your chance is not significantly worse than someone who does have a major in neuroscience (this of course also depends on your other credentials such as relevant research experience).
Another factor to consider is the structure of the Ph.D. programs in your country and in your new field. In some cases, such as biology or many European countries, you are required to select an advisor from the very beginning of the program, whereas in other programs, such as mathematics in the US, there is no such requirement: you are admitted into the program first and select your advisor only one or two years later. In fields that require you to pick an advisor from the beginning, having a different undergrad major may be less disadvantageous if, for example, you personally know (or your advisor personally knows) the professor that you will want to work with. Alternatively, you may be better off applying to programs that do not require a specific advisor if you do not have a specific commitment in the new field.
Upvotes: 4
|
2015/05/02
| 932
| 4,298
|
<issue_start>username_0: I am writing my thesis, and during my work, my advisor supplied me with some confidential documents that he still has from his previous work. Part of my work is based on these documents and I am unable to cite them because they are confidential. My advisor says that I can use them without citation because they are not available to public, but I am afraid using them without citation might be considered plagiarism and may come to hunt me me in the future.
What to do in this case ?<issue_comment>username_1: What to do in this case is **EXTREMELY** dependent on the exact nature of the documents involved, and particularly on who declared them confidential and why. You need to talk about this in detail with your advisor and understand the situation extremely carefully, because otherwise **citation may be the least of your concerns.**
Let me outline a few of the scenarios that you might be facing here:
* Your advisor may have supplied you with classified information, making you complicit in a serious crime and subject to high fines and a possible long prison term.
* The information might be controlled under other legal regulations, such as privacy legislation, insider trading rules, or international arms control. These are far-reaching laws that can interact with lots of bits of science you might not think are related. Once again, potential criminal prosecution unless the information is carefully managed / abstracted / de-identified in a way acceptable to the agencies that enforce these regulations.
* The information might be related to patents, trade secrets, or other IP not yet fully disclosed, in which case you might end up on the receiving end of a civil lawsuit unless appropriate permissions are obtained.
* The information might be related to something that another group is planning to publish on first, and it has been disclosed to your advisor with the understanding that they get to publish first and not be scooped. In this case, you need to coordinate publications schedules.
Many other situations might apply as well. In some of these situations, some form of citation or acknowledgement is appropriate. In others, the information must be carefully "fuzzed" and citation is inappropriate or even illegal. The details cannot be guessed without a thorough understanding of the exact nature of the information and how you are using it.
Bottom line: go have a long heart-to-heart with your advisor and work out exactly what is the nature of the confidential information and what is the appropriate way in which traces of it should be handled in your thesis.
Upvotes: 5 <issue_comment>username_2: You cannot use ideas, data, or wording without indicating their source. If they are based on a document you are not allowed to cite, then you have a major problem. (Note that you can cite documents that are not available to the public. That's certainly problematic, because other people can't verify or use the source, but it's different from not citing anything.)
Theoretically, you could attribute ideas or quotes to an anonymous source if the source approves but asks not to be named. That would be exceedingly unconventional, and I don't think I've ever seen it done in an academic paper, but it would at least be intellectually honest and avoid giving the impression that the ideas were yours. The same is true in principle for data, although it's not clear why readers would trust data from an anonymous source.
What worries me about your question is that you say your advisor still has confidential documents from his previous work. Maybe I'm reading too much into your description, but it sounds like the reason your advisor will not let you cite them is that he is not allowed to share them with you (and he may not even be allowed to possess or use them himself). If that's the case, then the whole project sounds unethical. It might be justified in some rare circumstances, for example involving whistleblowers or similar leaks, but outside of these cases, you cannot use confidential documents without explicit permission.
In any case, there is certainly no principle that says you can use documents without citation if they are not available to the public. Either you have misunderstood your advisor or he is wrong on this point.
Upvotes: 4
|
2015/05/02
| 450
| 1,874
|
<issue_start>username_0: I recently presented a poster at an undergraduate research conference, and they are now soliciting write-ups for the conference proceedings. I was surprised at their [formatting guidelines](http://urp.unca.edu/sites/urp.unca.edu/files/NCUR_Paper_Example.pdf), specifically the requirement that all sub-headings be lower case. Ex.:
```
1. Main Heading
1.1 this is a secondary heading
1.1.2 this is a tertiary heading
```
I have never seen this convention. I've always thought the preferred style for sub-headings is to either use the same capitalization format as for main headings, or capitalize the first letter of the first word.
I find this all-lower case format to be unprofessional-looking, and was wondering: is this a common convention, and if so, what is the underlying rationale?<issue_comment>username_1: I have never in my academic life seen this convention: I have always seen lower-level sub-headings follow exactly the same convention as higher-level headings.
Do what they tell you to do (the world has lots of strange Official Requirements for Our Unique And Special Snowflake Publication Venue), but I wouldn't worry about it for anywhere else in life.
Upvotes: 2 <issue_comment>username_2: The basic rule for all publications is to follow the guide-lines to the point. In this case it seems that the conference is expecting what used to be called "camera-ready" manuscripts, i.e. manuscripts that are "published" as is and not going through additional formatting by the conference organisation. With journals, the guidelines usually concern the formatting of a manuscript which later undergoes additional formatting before being published.
In any case, just follow the guidelines to the point. You would be surprised by how many fail to do so and you may even end up putting a smile on the recipients face.
Upvotes: 0
|
2015/05/02
| 650
| 2,837
|
<issue_start>username_0: Suppose someone publishes a survey paper in some journal on a topic where a lot of work is still going on. A lot of new interesting works are expected to be published every year on this topic. Is it a good idea to upload a version of the survey on Arxiv and keep on updating it? In that case, should the Arxiv version contain a note that an earlier version was published in so-and-so journal?<issue_comment>username_1: I'd say yes, the paper would be valuable for the community, and yes (it must cite the published version).
It would be a good idea to include some kind of version information in the title or on the title page, so that people can cite the version of your survey they use. This could be a simple as adding the subtitle "Version of May 2015" and change that whenever you produce a new version.
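For instance, if the survey is written in LaTeX, this can be a small change to the title block. The following is only a minimal sketch: the title, author, dates, and journal reference are all placeholders, not a prescribed format.

```latex
\documentclass{article}
% Placeholder title block showing one way to carry version information
\title{A Survey of Widget Theory\\
  \large Version of May 2015 (first version: May 2013)\\
  \normalsize An earlier version appeared in J.~Widgetology 12(3), 2014}
\author{A. Author}
\date{Last updated: May 7, 2015}
\begin{document}
\maketitle
\end{document}
```

Whatever markup you use, the point is that every version carries a date (and a pointer to the published version) that readers can cite unambiguously.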
Upvotes: 3 <issue_comment>username_2: >
> Is it a good idea to upload a version of the survey on Arxiv and keep on updating it?
>
>
>
It's worth doing if you have the time and energy, but it's unconventional. There are some continuously updated survey papers (such as the [dynamic surveys](http://www.combinatorics.org/issue/view/Surveys) in the Electronic Journal of Combinatorics), but the usual expectation is that a survey represents a snapshot in time. As a career matter, writing a second survey on recent results a few years later may get you more attention and credit than updating your original survey. Continuous updates can be especially useful if they are timely and the field is particularly hot, but they have their own limitations in a rapidly developing field (the original organization may weigh you down if you stick too closely to it).
From my perspective, the trickiest aspect is what to tell people. If you silently update the paper, it won't attract as many readers as announcing that everyone should check back for periodic updates. (And the whole purpose is to inform the community, so getting readers is important.) On the other hand, it's difficult to commit to regular updates, and it's nice to avoid issues such as whether other authors should be unhappy that their latest results haven't made it into the survey yet.
>
> In that case, should the Arxiv version contain a note that an earlier version was published in so-and-so journal?
>
>
>
If you update the arXiv version of a paper after publication, it's critically important to be clear about how it relates with the published version. Otherwise you risk confusing and upsetting your readers.
I would recommend against removing anything from the survey over time. It would still be accessible via past versions on the arXiv for those who know to look there, but it's annoying to send a student to learn about X in Arani's survey and have them report back that it's not covered.
Upvotes: 5 [selected_answer]
|
2015/05/02
| 912
| 3,780
|
<issue_start>username_0: Please pardon my somewhat peculiar case, I hope this is still a question useful to others.
After school I became an undergrad but paused studying after a few semesters of university to work full time. During the several years I was employed, I also became co-author of three papers, two of which were with a ["potential, possible, or probable predatory scholarly open-access publisher"](http://scholarlyoa.com/publishers/).
At the time I was not aware journals like that even existed, and as a professor and two doctors were also authors, I would never have imagined this could be an issue. As I am now a university student again and learned about this, I obviously regret my decision to participate. I am not sure how my mistake will be judged by others, and how far-reaching the consequences are, this is why I am asking here.
To be honest, I think it, while bad, was no terrible blunder, as I was "only" an employee at the time (though an undergrad before that). Also, I suspect most future employers would probably not even recognize my mistake.
Clearly it is different if I were to pursue an academic career. Are past flaws of this kind forgivable - and if so, should I (actively) point out that I made a mistake and will avoid it in the future?
Annotation: Many people claimed (elsewhere) that the journal in question did no peer-review. This was not the case for me, as we always had a list of (content) issues to address, usually by 4 to 6 reviewers.<issue_comment>username_1: While publishing in a predatory journal doesn't help your CV, I don't see this as anything near a fatal mistake even if you are pursuing a career in academia. The main cost is just the wasted time and effort and the fact that you could have gotten credit for these papers if published elsewhere.
Given that you were at such an early career stage when you wrote these papers, it seems that you can readily be forgiven for falling victim to a predator publisher. Tricking people in this way is, after all, often a component of their business model. When I see a professor publishing repeatedly in these journals, that suggests he or she is trying to pull something over on someone. When I see an undergrad or someone straight out of undergrad doing so, I feel badly for the student and angry at the publisher.
I would simply list these papers on your CV and move on. They're part of your publication record, so you can't really omit them in good faith -- but nor do you need to beat yourself up over it. If someone asks, be forthcoming both about the fact that this journal is questionable and about the way that you ended up publishing there.
You haven't done anything terrible. You've been exploited, and maybe if someone is a harsh judge he will think this makes you look a little silly. But in your situation I don't think most people would hold that against you.
Upvotes: 6 [selected_answer]<issue_comment>username_2: Almost always you sign the rights over to the journal when you publish. Now, imagine you are a phD student, publishing his or her's graduate life to this one journal. You find out it's a scam, and now your work is published in a scam journal and you have no rights to your own work anymore. This is where the big problem with those scam journals come from. Not that they were published per say, but that you now have no legal claim to your own research.
It doesn't sound like that's your case. As others have said, list them on your CV, and that's that. If anyone asks, you can tell them what you told us. It certainly isn't a stigma or a black hole of academia. Though, I would highly advise you in the future to be more cautious and to only work with those in academia/ research who are more knowledgeable on this topic.
Upvotes: 2
|
2015/05/03
| 1,399
| 5,930
|
<issue_start>username_0: Let's say you are at the pub and random layperson asks you the question: "So I heard you did a PhD. What was it on?"
What do you do if your PhD was in pure maths. What techniques can you use that will convey to them what you did in your research, and that it was important?
What sort of techniques do people use in the [three minute thesis competition?](http://threeminutethesis.org/)<issue_comment>username_1: If you can give them a semi-real-world example of the kind of problem that your work will help solve, that's probably as much as most of them really want. If you're far enough out into the abstract mathematical philosophy that you really can't do that, see if you can come up with a brief description of what question (or kind of question) you're trying to answer. It doesn't have to be complete; most folks just want to have some general sense of what area you're working in... just as you would probably be satisfied to know that I'm working on IBM's new generation of server software without wanting all the gory details.
They can always ask for more detail if they have the interest and background to appreciate it.
Upvotes: 2 <issue_comment>username_2: You could say that you got your PhD in the art of thinking. You used logic, and you put together existing ideas in new ways. If the person asks, Oh, was it in philosophy? You could say, not really, it was more mathematical. If that's a conversation stopper... then you'll be glad you didn't waste your time saying anything meaningful. If the person lights up when you say "mathematical," then you know you can have a satisfying conversation.
Or maybe you could say that you were working with novel mathematical tools for analyzing certain types of problems, and that you were concerned with establishing theoretical underpinnings.
Upvotes: 0 <issue_comment>username_3: This questions will vary greatly depending on your research area. Many people in applied fields will have no more issue than any other scientist, while those working in, say, algebraic geometry tend to have a tougher road. That said, here are a few principles I have found helpful:
1) Find the **smallest question** that captures the **key idea**. You don't have to explain your particular problem, so much as give a flavor of the sort of question you work on. For example, in Schubert calculus, everyone always leads with the question "Here are four lines in space. How many lines intersect all four of them?", even for other mathematicians! You can then allude to higher dimensions, broadly say how your work ties in (or just assert that it does!), and so on.
2) **Return to the specific**. Use vivid metaphors as a way to overcome abstraction. For example, I often describe a permutation as a row of line dancers, and a simple transposition as two people do-si-do-ing to swap places. Now I can talk about the dancers' motivation as motivating my questions, and it somehow seems less arbitrary. Providing something tangible to visualize helps a great deal. This is completely at odds with the usual mode of mathematical communication, where we wish to abstract away every specificity.
3) **Inject narrative**. Talk about the history. Mention people's names, and say what they did. Give a sense of the scope of the human endeavor that is (a) mathematics, (b) your field and (c) your specific problem. "Littlewood and Richardson came up with a rule for multiplying these polynomials (Schur functions) in the 1930s, and said the proof follows from 'simple combinatorics', which they thought beneath them to do. Forty years later, Schutzenberger finally figured out how to do the simple stuff, after many failed attempts by famous mathematicians, some of which were published!" It would here be appropriate to mention some details about Schutzenberger's fascinating life.
Edit: Explain why **you** care! Talk about how you came to the problem, your motivations (beyond glory) for solving it, why you think it's worthwhile.
4) Create opportunities for **dialogue**. Obviously this only applies to someone who is genuinely curious, as opposed to being polite. If you provide the over-arching perspective (1) and provide a specific and familiar framework (2), your listener will have a framework they can use to start asking questions. Metaphors will be abused, and their limits exposed, but that's okay. At the end of it all, someone might call you a "math detective".
5) Be willing to **sacrifice** a little (or a lot) of **accuracy**. It's okay to describe the overall thrust of your area, rather than the particular question that you work on. It's okay to give a wishy-washy description of something that glosses over many complications. Despite working in the field that most prizes accuracy, mathematicians regularly gloss over subtle issues with each other. The standard should be **much** lower when dealing with non-mathematicians. Seriously, it's okay!
6) **Steal shamelessly** from others in your field. Read popular accounts of your area, or ask other people how they try to describe things. If you work in the Langlands program, read <NAME>'s book and see how he tackles this challenge. Look at the "What is a..." series in Notices of the AMS. In general, people seem averse to doing this across academia, but everyone benefits if you can use the best exposition, regardless of whether or not it is original to you.
Upvotes: 4 <issue_comment>username_4: I don't think my comment directly answers the question, but voting so far seems to say it's important enough it's probably worth saving:
>
> In my experience, people mostly believe there's no way to do maths research, because it's all already done. They also think we spend our time on either very big numbers or very long equations. I therefore think that anything that helps them understand the nature of mathematics is worthwhile, even if it doesn't directly answer the question.
>
>
>
Upvotes: 3
|
2015/05/03
| 858
| 3,693
|
<issue_start>username_0: I have recently defended my Ph.D. dissertation and submitted it to my graduate school's **e-repository** of scholarly works as well as to the **ProQuest** database. I have not heard yet from ProQuest about the respective publication, but my work appeared blazingly fast in my school's repository, which is great, as I already can cite it in my CV and elsewhere, where appropriate.
As I understand the terminology in the area, a dissertation or thesis submitted to ProQuest (or another scholarly database, for that matter) is referred to as *published*. On the other hand, the same document, submitted to a university's e-repository or similar archive, is referred to as *unpublished*. Also, while I expect ProQuest to assign a DOI to my work, my university's e-repository doesn't seem to include this step. Considering all the above-mentioned information, I am curious about the following:
* What is the optimal strategy for maintaining and citing both **unpublished** and **published** versions of my dissertation? Since the published (ProQuest) version will not be available to people lacking access to ProQuest, does it make sense to maintain and cite both versions, so that other interested people will be able to access and cite the unpublished version?
* What is the optimal strategy for **assigning DOI** to my dissertation report (either to both versions, or to the unpublished, if ProQuest will assign DOI to the published one)?<issue_comment>username_1: You should avoid citing it twice, because it's the same work (think of it as a book published by two different publishers - you wouldn't cite both versions). Whether you prefer to cite it with a ProQuest DOI or with a link to the university repository is up to you, though possibly a given journal may have an opinion on which they prefer. Depending on how formal the citation style is, you could do something like:
* <NAME> (2015). ***ProQuest citation; DOI***. Copy available from [repository]
which would let you use both access methods.
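In BibTeX terms, that combined reference might look something like the following hypothetical entry. All names, identifiers, and URLs below are placeholders, and whether the `doi` and `note` fields are actually printed depends on the bibliography style you use (the `\url` command assumes the `url` or `hyperref` package in the citing document):

```bibtex
@phdthesis{doe2015dissertation,
  author = {Doe, Jane},
  title  = {Title of the Dissertation},
  school = {Example University},
  year   = {2015},
  doi    = {10.0000/proquest.placeholder},
  note   = {Copy available from the university e-repository,
            \url{https://repository.example.edu/handle/0000}}
}
```

The `note` field is where the repository copy can be mentioned without turning it into a second, separate citation.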
For DOIs, it's unlikely that your repository will assign a DOI to their version - most repositories aren't set up to issue DOIs. The repository is intended as an alternative way to access it rather than a different publication, and so material hosted by a repository tends to give the bibliographic details of the "real version" rather than provide their own.
(Also, I'm not sure what you mean by "maintaining" - are you envisaging updating it over time? This would be quite unusual for a doctoral thesis...)
Upvotes: 3 [selected_answer]<issue_comment>username_2: Regarding strategy for assigning a DOI, my personal preference is to use my university library's repository for that. I prefer to use my own university's repository as the canonical source of bibliographic information about my publications to remain maximally in control of it. My university library happens to be able to assign DOIs. I realise that not all repositories do this and in that case, I would recommend using a repository such as [figshare](http://figshare.com) or [Zenodo](https://zenodo.org) because they provide open access to the published material and you retain your copyright. It seems ProQuest offers the latter but not the former.
A special case where one has to be careful is if you do not own the copyright to all of your thesis. This was the case for my own thesis which consists partly of papers for which Springer and IEEE own the copyright. They allowed publication on my university's own repository but I could only post the introduction on figshare, because that implied [CC-BY](https://en.m.wikipedia.org/wiki/Creative_Commons_license) licensing.
Upvotes: 1
|
2015/05/03
| 1,555
| 6,637
|
<issue_start>username_0: Dear Academic SE advisers,
I am a college sophomore in the US with a double major in mathematics and microbiology. My algorithmic biology research got me passionate about number theory and analysis, and I have been pursuing the mathematics major since this spring semester. This semester I have been independently studying the number theory textbooks written by Niven/Zuckerman/Montgomery, Apostol, and Ireland/Rosen. As this semester progressed, I discovered that I am more interested in pure mathematics than in applied aspects (computational biology, cryptography, etc.). I want to pursue a career as an analytic number theorist and prove the Collatz conjecture and the Erdos-Straus conjecture.
I have been thinking about doing number-theory research at my university (a research university with a huge mathematics department). I have been self-studying NT and also regularly attending the professional and graduate seminars on number theory, but I have not done any pure mathematics research as an undergraduate. Should I visit NT professors at my university and ask them if I can do undergraduate research under them? If research is not possible (perhaps due to my lacking maturity), should I request to do independent reading under them and later proceed to research? How should I ask them? What should I address? If even independent reading is not desirable to them, what should I ask them, or do on my own?
As for my mathematical background, I have been taking Calculus II (computational) and discrete mathematics. I will be taking Calculus III (vector calculus) in the summer, followed by Analysis I, Probability, and Theoretical Linear Algebra in Fall 2015. As for my self-study this semester, I have been working through the NT textbooks mentioned above, proof methodologies, and basic linear algebra.
Thank you very much for your time, and I look forward to your advice!
Sincerely,
PK<issue_comment>username_1: While it is possible for a highly competent mathematician to dole out a doable problem for an undergraduate to solve over the summer, empirical data has proven otherwise - i.e., it's rather hard for an undergrad to prove anything original in number theory if only given a few weeks during the summer.
For an REU in NT, it might be more realistic to expect to read some interesting topics in number theory or perform some numerical analysis. Take it as a bonus if you obtain any original result.
Number theory is known to be a very difficult topic to get into. Before you decide to commit, take courses in complex analysis and abstract algebra. They are crucial if you want to read more advanced texts in number theory.
Good luck!
Upvotes: 3 <issue_comment>username_2: >
> Should I visit NT professors in my university and ask them about if I can do undergraduate research under them? If research is not possible (perhaps due to my lacking maturity), should I request of doing independent reading under them and later proceed with the research? How should I ask them? What should I address? If even independent reading is not desirable to them, what should I ask to them or do in my own?
>
>
>
Don't be shy. Go talk to some professors! You can start the conversation by telling them what books you've been working with, and ask for some academic advising. This means the professor will go over your plan (which courses to take, when, and in what order) with you. The professor will likely confirm the wisdom of the tentative plan you came up with -- and then the conversation will blossom from there. A professor may make some specific suggestions for coursework and/or independent study. But maybe you should wait until you've got more coursework under your belt before proposing a research project. (Still, someone may surprise you!)
You can start with a short email saying that you have fallen in love with number theory, and would like to make an appointment for some academic advising.
It is always a pleasure when two people who love the same thing get together for a chat. You have nothing to fear.
Upvotes: 2 <issue_comment>username_3: This depends a bit on where you are. You mention that you are at a large research university, presumably with a large graduate program - in these places I expect that it is pretty rare for an undergraduate to do research directly with a faculty member unless they are exceptionally talented. Faculty members have their own research programs and their own graduate students to direct, and coming up with interesting yet tractable problems is hard! However, it can't hurt to ask. Since you've been regularly attending seminars, you should know who the regular attendees are - send an email to someone to set up a meeting to chat (or drop by an open office hour), tell them what you've told us, and ask if they have any suggestions for what you should do next. Sending the email first gives them the chance to think about your meeting ahead of time, and if they're not interested they could just send an email back saying so. As for what exactly to ask, I would recommend just asking for suggestions and see what they come up with. If they don't seem inclined to do so, asking for suggestions on what to read/which courses to take is a good plan.
In a smaller liberal arts place, perhaps surprisingly, there are more opportunities for undergraduate research, and just generally more opportunities for interacting with faculty members.
However, there are other opportunities to get into research other than working directly with faculty. In a large research university, there are probably lots of graduate students and postdocs (who probably also attend the same seminars as you). They, particularly if they are interested in a more teaching-focused direction, might be willing to talk to you on some regular basis about your readings. (I recently heard of the [University of Chicago Directed Reading program](http://www.math.uchicago.edu/~may/VIGRE/), which sounds pretty cool.)
Lastly, I know that you mentioned you wanted to do research at your own institution, but I'd like to ask you to reconsider. There are several summer research opportunities in mathematics, and they are a fantastic opportunity not only to learn math and get the experience of tackling a problem on your own (or in a small group), it's also good to just meet other people at your stage in life with similar interests, as well as mathematicians from other universities. If you're curious, here is [a list of summer REUs](http://www.ams.org/programs/students/emp-reu) (REU stands for Research Experiences for Undergraduates).
Upvotes: 3
|
2015/05/03
| 601
| 2,606
|
<issue_start>username_0: I am about to submit a paper to a respected math journal. In this paper I solved a conjecture that was raised by a member of the editorial board (and this is one of the main reasons I chose this journal).
When submitting, I need to choose a member of the editorial board to handle my submission. Given his relation to this paper, is it OK if I list this person as the requested handling editor? Or does this cause some problem?<issue_comment>username_1: Yes, it is OK. There's certainly no conflict from my external perspective, and if they decide that there is a conflict, the person in question can either decline the assignment or otherwise pass it off to a different editor at the journal. No one is going to get you in trouble for requesting this editor.
Upvotes: 4 <issue_comment>username_2: In fact, I think that is the best choice! That guy is likely to know more about the subject, and be aware of previous attempted solutions.
Upvotes: 6 <issue_comment>username_3: From an economical point of view, chosing this editor seems pretty reasonable. He is probably experienced with the field and thus probably best suited for selecting referees or may even decide to review the paper himself (and then probably hand the paper over to another editor). In both cases and on average, this speeds up the review process and raises the quality of the reviews.
However, the editor may be biased regarding the importance of your work, which however is no real issue if the work is clearly over the journal’s relevance threshold. Moreover, the editor may overprioritise it and be more lenient towards it. All of this is in your favour and I would argue that dealing with this is the editor’s or some supervising editor’s job and not yours. One could however also argue that you should select another editor for this reason.
Just selecting another editor may however also have negative repercussions, as editor that raised your conjecture may feel omitted or you may be regarded as having been sloppy when choosing the editor and thus wasting the editors’ time.
Thus I would suggest to elaborate either decision of the editor in one brief sentence in the letter to the editor (or the journal’s equivalent), for example:
>
> Note that we did not choose X as a handling editor because we already suggested him as a referee.
>
>
> Note that we did not choose X as a handling editor to avoid conflicts of interest.
>
>
> Should our choice of X as a handling editor be regarded as causing a conflict of interest, we kindly ask the journal to choose another editor.
>
>
>
Upvotes: 3
|
2015/05/03
| 553
| 2,356
|
<issue_start>username_0: I'm currently studying for a computer science Masters degree in a top UK university and I wish to pursue a career in academia by getting a Ph.D. degree. I got perfect coursework grades, however, I didn't do well in my exams which count for 90% of the total grade in every course. Unfortunately, I'm 90% confirmed that I'll fail one of the courses and even if I resit it I'll only graduate with an overall pass due to the department's regulations related to failed courses. My current thesis supervisor, who also supervised my research course last semester, is very pleased with my work and offered me a Ph.D. position in a newly funded project. However the department's regulations admit only distinction holders, especially students graduating from the same department.
Is there any chance of me being admitted or is it impossible for a "pass" Master holder to continue their studies?<issue_comment>username_1: If your thesis supervisor has offered you a Ph.D position and the project is a newly funded one, then you are very likely to be accepted. I have seen that figure in a couple of top universities in Europe in which what counted more was the acceptance of the professor who will agree to supervise you. The other stuff, e.g. submission of grades and certificates from your M.Sc. is just administrative stuff. Try not to lose that opportunity and start talking with your future supervisor about your research work plan for your Phd studies, and funding (which I suppose you are entitled also for that) in case that you need it.
Good luck!
Upvotes: 1 <issue_comment>username_2: You need to talk with your thesis supervisor to find out whether the department's regulations are "hard" or "soft." Many departments have clear regulations which are "soft" in the sense that they can have small exceptions made if a professor can make a case for why such an exception should be made. This is very often the case, and if this is the case for your department and your supervisor wants you as a Ph.D. student, then you are fine.
Sometimes, however, a department will not allow such leniency for its professors. If this is the case, and your supervisor wants you as a Ph.D. student, then they should also be able to advise you on what to do to try to get yourself into compliance with the regulations.
Upvotes: 3 [selected_answer]
|
2015/05/03
| 568
| 2,511
|
<issue_start>username_0: This college, which I will not name here, specializes in computer-related programs, which makes its other courses, like political science, unpopular among student applicants. In one instance, the social science department only had three students who graduated with a degree in politics. Because of the very small student population, the school only had two political science professors who handled all the major subjects. Would you say that this is sub par college?<issue_comment>username_1: I can't comment about the unnamed college you are describing, because you haven't given enough information to make a judgment. But the answer to the question in the title is simply
No.
===
Upvotes: 3 <issue_comment>username_2: I think when you are talking about departments with only two permanent faculty that the teaching and research opportunities are necessarily sub-par. While even top departments have gaps in their expertise and some off-topic teaching is inevitable, with only two faculty members there are going to be substantial gaps in expertise. It is not even clear to me how two members of faculty can develop and maintain a complete curriculum.
Further, with only two members of faculty, a substantial amount of teaching will either be done by adjunct faculty, who will be essentially unsupervised, or in other departments. Both cases, I think reduce the quality of education.
With only two members of faculty, there will also be limited chances for gaining research experience.
Upvotes: 2 <issue_comment>username_3: If you're talking about MIT or Caltech, then the answer would be that their degrees in the humanities and social sciences are equally as respectable as those from any other colleges and universities in terms of graduate admissions (in social sciences/humanities).
Part of this is that we realize that folks who fall out of the mainstream at these schools merit some attention. It takes more of an effort and commitment to do sociology at MIT than it would at U-Michigan. For example, in order to get the credits to graduate, you've also undoubtedly taken courses in that discipline that were offered outside of that institution -- in MIT's case, you'd be taking classes at Harvard/Radcliffe. That takes dedication and forethought.
Finally, we realize that students choose their undergraduate institutions based on limited knowledge and choice. We weight their accomplishments based on what they were able to do with the resources that they had.
Upvotes: 3
|
2015/05/03
| 633
| 2,536
|
<issue_start>username_0: Recently, the [single-blind peer-review process](http://blogs.plos.org/everyone/2015/05/01/plos-one-update-peer-review-investigation/#.VUO6vdTtlT0.twitter) failed to appropriately deal with highly sexist comments. An anonymous reviewer provided a sexist review and the Academic Editor forwarded it on. They have since blacklisted the reviewer and asked the Academic Editor to step down. While I think that blind peer review provides useful protection for reviewers, are Academic Editors generally provided anonymity? Further, is there any precedence for when a journal should reveal the name of a reviewer?<issue_comment>username_1: >
> While I think that blind peer review provides useful protection for reviewers, are Academic Editors generally provided anonymity?
>
>
>
In my experience this is rare but not unheard of. For example, the [PNAS submission guidelines](http://www.pnas.org/site/authors/guidelines.xhtml) specify that the editor handling the paper will remain anonymous until the paper is accepted. Presumably this is meant to protect editors from retribution over a rejected paper. I'm not convinced this is necessary, but the existence of these policies indicates that someone must care.
>
> Further, is there any precedence for when a journal should reveal the name of a reviewer?
>
>
>
I'm not aware of any policy that allows journals to reveal the name of a reviewer without the reviewer's consent. It could be reasonable in a case like this, but I wouldn't want to be in charge of writing a policy delineating when it is or isn't allowed.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Anonymity, when used for any scientific role, is intended to make it easier for people to conduct honest scientific assessments. It is not intended to be a shield from which to attack with impunity.
In business, there is a concept of "[piercing the corporate veil](http://www.nolo.com/legal-encyclopedia/personal-liability-piercing-corporate-veil-33006.html)," in which the shielding of corporate liability limits is removed in cases of gross misconduct. Likewise, I think that it is reasonable to pierce the veil of scientific anonymity in cases of gross misconduct. This recent case of "please add a male author" is one such; others could include abusive personal attacks or plagiarism.
I'm not sure that exact boundaries of such a policy would need to be spelled out in advance: simply saying "anonymity may be breached in cases of gross misconduct" may be sufficient.
Upvotes: 3
|
2015/05/03
| 1,171
| 4,386
|
<issue_start>username_0: When publishing a paper, some researchers publish the source code used for the paper.
Is there any research/study/survey/... that looked at how much effort do researchers take to publish their source code? I.e. how many hours do researchers take to publish the source code for a given paper?<issue_comment>username_1: This isn't a peer-reviewed article, but nonetheless it's worth linking to because it specifically addresses your question, albeit as an n=1 case:
[Bruna, E. 2014 THE OPPORTUNITY COST OF MY #OPENSCIENCE WAS 36 HOURS + $690](http://brunalab.org/blog/2014/09/04/the-opportunity-cost-of-my-openscience-was-35-hours-690/)
In this blog post, a biologist <NAME> states it took **about 25 hours** of his time to appropriately document his code (associated with a research paper) to make it good enough for open source release on github.
Upvotes: 4 <issue_comment>username_2: I have a significant number of papers where we make the (nontrivial) code available through various means. Examples are here:
* <https://www.dealii.org/8.2.1/doxygen/deal.II/step_42.html>
* <https://www.dealii.org/8.2.1/doxygen/deal.II/step_43.html>
* <http://aspect.dealii.org>
* As well as individual algorithms and data structures used in deal.II that are discussed in a few of the papers referenced here: <http://dealii.org/publications.html#details> .
In all of these cases, documenting the code adequately to make it suitable for publication along with the paper was part of writing the code (like for all significant code, documenting should be part of writing it) and would likely have taken at least 2 days in the case of the tutorials, and maybe 4-8 hours in the case of some of the specific codes and algorithms. It's something one should do anyway, but even if one doesn't, doing it is not usually an overwhelming effort.
Of course, this would not apply to a code like ASPECT, for which writing the documentation (such as the 230-page manual) is an effort that likely represents months of work.
Upvotes: 3 <issue_comment>username_3: This is a study that analyzes whether computer science papers include source code that makes it easy to reproduce their results.
<http://reproducibility.cs.arizona.edu/>
The study found that out of 601 papers analyzed, 139 included source code that could be obtained without contacting the authors, and the study's researchers were able to email authors to get the source code for an additional 87 papers.
Of the 226 papers the authors obtained source code for, they were able to configure and run the source code within half an hour on 130 papers, without contacting the authors on an additional 64 papers, and after contacting the authors on a further 23 papers. For 9 papers, the study's researchers could not run the source code at all.
These results don't show how much time researchers spend on making their source code available, but it does show how frequently papers are published with accompanying source code and what quality that source code tends to be.
[PLL](https://academia.stackexchange.com/users/1277/pll) made [an excellent comment](https://academia.stackexchange.com/questions/44732/how-much-effort-do-researchers-take-to-publish-their-source-code/44766#comment100954_44766). I'd like to add it to my answer in case it disappears later:
>
> Just to summarise: **the overall success rate should be seen as 217 out of 402**. Of the full sample of 608, 206 were excluded for some reason or another --- e.g. their results weren't based on code in the first place. 402 were left that *should* have contained code.
>
>
>
Upvotes: 3 <issue_comment>username_4: I disagree with your premise that it actually takes any effort at all.
Version control is needed anyway for collaboration with your coauthors, as a backup, and for version history, and GitHub, BitBucket and SourceForge offer it for free.
Writing good code is necessary so that your colleagues can understand what you are doing, and even if it is a one-person project, you need to understand it yourself half a year later.
There are even additional benefits:
* increased acceptance chance of publications
* bug reports through issue tracking help with development
* increased exposure and more citations
So you only hurt yourself if you don't publish your source code, which means that, all things considered, the effort is actually negative.
Upvotes: 1
|
2015/05/03
| 639
| 2,922
|
<issue_start>username_0: In some fields of research, huge collaborations are the norm. This is especially true in experimental high-energy physics.
Say I'm evaluating someone's resume, and that person is transitioning out of one of these fields into something else, such as a career in industry or teaching. I'm not a specialist in that field, and I'm not familiar with current research in it except at a very broad level. How do I tell whether this person was any good at what they did? I can do a literature search on the person's name, but that will just pop up a bunch of papers where the list of authors looks like this: "The XYZ Collaboration: <NAME>, <NAME>, ... [183 additional authors not shown]"<issue_comment>username_1: I have two answers.
First, if the person of interest has many publications with a diverse range of coauthors, then you *might* be able to glean some insights by looking at the overall pattern. For example, you might be able to determine this person's areas of specialization and perhaps their research focus. But these inferences depend on having papers that are sufficiently diverse that you can isolate interesting patterns that are common to all of them, but otherwise uncommon.
My second answer is that you probably can't derive any useful information from the fact that a person has been part of a very large collaboration team. Instead, you need more detailed and context specific information about their tasks, responsibilities, and performance. This might come from seeing their specific work products, from their performance reviews or recommendations from their supervisor or peers, or similar.
In a way, this is no different than the situation in industry where you might know that a person is employed at a National Lab, and maybe that they are part of XYZ Department, working on ABC Projects. You can't really tell much from that information about their individual role or individual performance.
Upvotes: 2 <issue_comment>username_2: Years ago, I was on the college executive committee when we had to decide whether a certain experimental physicist should be promoted (to tenure, if I remember correctly). Her papers were of the sort you describe, huge collaborations that gave us no information about the extent and quality of her contribution to the project. But that information was provided, in considerable detail, by the letters from external reviewers. (Here "external" means outside our university. In cases like this, the college waives its usual rule that so-and-so many letters must come from people who are not among the candidate's co-authors, because essentially everyone who has real information about the candidate's contribution is a co-author.)
Upvotes: 4 <issue_comment>username_3: I heard that the number of presentations they give (check e.g. [Indico](https://indico.cern.ch/)) is a better indicator than the number of papers.
Upvotes: 0
|
2015/05/04
| 1,400
| 5,413
|
<issue_start>username_0: I've noticed that this idea of a "meal plan" among students is pretty common in the US. Usually this involves:
* Some amount of money (e.g. "flex dollars") that can be spent on any food (or sometimes even other things) that the student desires, provided that the student spends this money on certain on-campus locations.
* The student may get a fixed number of "meal swipes" for eating at the dining hall.
* This is all paid for in-advance by students, often as part of a "room and board" fee.
* Some universities even require students to purchase such a plan if the student lives in certain parts of university housing, even at some urban universities where there might be more dining options. (I have heard students claim that it would cost the same amount of money as a meal plan to eat at neighbourhood restaurants.)
I have also seen some version of this in Canadian schools: for instance, there was a complaint circulating about poor dining hall management at [Memorial University](https://imgur.com/a/7Zhhy/embed#1), which alludes to a required meal plan. Generally speaking, as a student, it seems that such meal plans are chiefly used by undergraduates, although I've also seen graduate students eat at the dining hall.
I suspect that this system is specific to North America, although I'm not completely sure. For instance, I saw [this document](https://studyabroad.colorado.edu/_customtags/ct_FileRetrieve.cfm?File_ID=34662), which stated that the Hebrew University doesn't carry meal plans. I've noticed that university cafeterias in Hong Kong make students pay for food with actual cash when they get their meal, even if there are occasional loyalty schemes (of the same variety that might appear at any fast-food restaurant) or discounted items (where things are much cheaper than they would be off-campus). From a quick glance, HKU and HKUST's housing websites appear to make no mention of any sort of "meal plan". (Meanwhile, the University of Chicago's housing website has a page for residential dining.)
Thus: **is the meal plan system (where students pay in *advance* for dining hall meals and potentially other on-campus perks) a chiefly North American thing?** For obvious reasons, I would assume that it would be more common in heavily non-commuter universities (which perhaps dominate the US and Canada more than the rest of the world), but I'm wondering if this is *generally* true, even if we take the commuter/non-commuter thing into account.<issue_comment>username_1: This is not unique to North America and the idea of a meal plan exists in the UK. In the UK halls (i.e., dorms) are either self-catered or catered. Catered halls usually provide breakfast and dinner and in some cases either lunch, a boxed lunch, or points that can be spent on campus.
Upvotes: 3 <issue_comment>username_2: There is no such thing either in my native France or in Japan where I am currently located. You pay for your on-campus meals either in cash or using a prepaid e-money card, and it is certainly not required to eat on campus. (I suspect that introducing such a requirement would lead to massive protests.)
Upvotes: 3 <issue_comment>username_3: I am still not entirely sure about the boundary of "meal plans", but, based on the discussion in the comments, I will answer based on this partial question by the OP:
>
> I'm interested in is whether not paying an advance lump sum for regular meals is really as common outside of North America
>
>
>
As far as I know, German universities usually have no meal plans, if that means signing up (let alone being obliged to sign up) for a contract that amounts to a certain number of vouchers specifically for meals (and possibly other paid-for services).
Usually, on-campus lunchrooms are run by student service organizations that are separate entities from the universities. These organizations are partly subsidized from tax money, and partly funded based on a solidarity fee that has to be paid by *every* enrolled student every semester (e.g. €60). As a result, meals in such on-campus lunchrooms (which often really just offer lunch, and only from Mon to Fri) can be bought at a "normal" price (comparable to *very* cheap restaurants, e.g. around €4 to €5 for one main dish) by guests, while anyone connected to the university gets a certain discount (e.g. there might be two discount levels, one for students (e.g. roughly €2 to €2.50 for one main dish), one for employees). This buying of meals, however, is spontaneous and can be repeated as often as desired, i.e. there is not a fixed amount of previously ordered coupons, and the transaction for the discounted price takes place only then and there in the lunchroom.
So, in a way, students have to pay beforehand, but it's not a payment that is 1:1 mapped to meals. Your mileage may vary on whether to consider that a "meal plan"-like system.
*The prices are examples that fit for some German universities, but certainly not all. I have added them to convey a rough idea of the extent of discounts and total payments.*
Upvotes: 3 <issue_comment>username_4: I think forcing students to spend part of their money on campus would be illegal where I live (and in most of the western world). I'm actually kind of surprised this is still going on in the US, especially given the US protests against the truck system in the late 1800s and early 1900s.
Upvotes: 0
|
2015/05/04
| 1,679
| 7,117
|
<issue_start>username_0: I just completed my senior year. I have been accepted to a grad school (say University X).
**CASE A**: I am quite familiar with the fact that one shouldn't change grad schools during a PhD. It is highly frowned upon and regarded skeptically. However my case is somewhat different.
I want to reapply for some PhD programs next year (fall). But, at the same time, I don't want to lose the only PhD seat I've got. So, I was thinking about whether I can go to University X this fall and apply for next fall (applications start this year's August).
If I get accepted, I would simply move to the other university. I obviously won't need any recommendation from professors of University X. It would be as though I had simply applied as an undergrad who waited one more year.
But my worry is, will this be considered Case A? I am worried because if it is, then I may end up screwing up my relationship with the academic community as a whole. When applying, I would be in the 1st semester of a PhD, which is not really a big deal. I wouldn't have any advisor at that point and would simply be taking a few classes.
Any suggestions?<issue_comment>username_1: I don't see anything inherently wrong with changing PhD programs sufficiently early and with good cause. I do, however, find it wrong to be secretive and conspiratorial about it. In other words, you should by no means lie about having been (or currently being) enrolled in a PhD program at University X. Also, you should be prepared to tell the admissions committee why you are transferring. I would say that "yours is a better program" is not enough; you should present a more compelling reason (e.g. a particular research interest and a desire to cooperate with some particular faculty at the new uni) and be able to back it up.
You are probably worried about what University X will do when they find out about your application. Well, chances are, if you are not assigned to a specific project with a specific mentor (you say you are not) or receiving any funding, they won't care that much.
If you are worried that, once you mention University X, the new institution will contact someone there to inquire about you: that could indeed happen, even without a mentor.
In short, lying to the new institution is very bad and could hurt your chances tremendously. Deceit is usually (with negligible exceptions) a death sentence in academia. The new institution could not only refuse to admit you, but even contact University X and tell them about your misconduct. Now that would mean nothing but trouble for you.
Upvotes: 3 <issue_comment>username_2: One key point: you don't mention anywhere in your question **why** you want to transfer. Because you say "some PhD programs", I gather you don't have a specific other program in mind, and I can only assume that you don't have some compelling personal reason to do so (e.g. living in a certain city so as to care for a family member) but rather are simply trying to trade up.
Most students who enroll at programs which are not the top ones in their field would like to (or should like to!) move to better ones if they could, but because transferring programs is a fair amount of inconvenience for everyone, the threshold for doing so is rather high. If you want to get into a better PhD program in year N+1 than you did in year N, then it would be reasonable to have a better application.
Trying to improve your application while newly enrolled in a PhD program is a bad idea. First of all, as everyone else has said, **of course you must disclose the fact that you are currently enrolled in a PhD program if you are applying to a different one**. The fact that you think otherwise is a bit alarming, not because this is such an evil thing to do but because it shows how far away you are from understanding academic culture. Do you think that no one will find out that you are currently in a PhD program?!? Think again: likely all of your recommendation letters will mention this, for instance.
Moreover, the first semester of being a PhD student is a tough time to build your application: no one at your new program knows you very well (or at all), you haven't done anything much, and more than likely you are just absorbing the culture shock of a new environment and are not in a position to show superiority (and in fact most first year PhD students are fairly inept compared to other PhD students in the program: I know I was). This phenomenon comes up when first year PhD students try to reapply for certain graduate fellowships (like an NSF fellowship) that they were also eligible for as a graduating undergraduate: they have to get a mix of recommendation letters from faculty at their old university who probably have nothing new or better to say about them than the previous time around and from faculty at their new university who have trouble saying more than "Mr. X is a student in our PhD program -- isn't he? I'm pretty sure."
My advice is to do one of the following things.
* Ask the university who accepted you whether you can defer for a year. It is much better to ask this question up front than for them to find out later from someone else that you are trying to trade up. It is a fairly good bet that you will be able to do it: I would much rather enroll a student a year later if in that intervening year they figured out that they really want to come.
* Enroll in a non-PhD program either there or somewhere else: a non-degree program or a **master's program**. In the American system, getting a non-terminal master's degree is essentially the culturally accepted version of the kind of academic laundering that you seek to do. This is probably two years rather than just one year, and if you play your cards right you can actually improve your application and profile in that time.
Upvotes: 4 <issue_comment>username_3: Getting a PhD from a decent institution takes lots of time and effort. Unless you are working on a project that highly motivates you the experience will be more difficult and unpleasant that it is worth.
If you don't feel compelled to devote an important chunk of your life to investigating a particular dissertation subject, then the Master's degree route makes a lot of sense. You should be better able to choose a research project after a year or two in an academic department.
You need to look for the best place to pursue answers to academic questions that will keep you motivated for the next 5-7 years. And if you get to the point where you are choosing advisors, don't fail to find out everything you can about them from their former students.
If you can honestly tell a recruiter that it is your heart's desire to investigate so-and-so with Professor X then that will make you a much more attractive candidate. You will be able to honestly say that because you will have read all his/her papers on some subject area and you can think of nothing else you would rather do than help find answers to the remaining open questions. Spending years in grad school to get a PhD without being properly motivated is a recipe for misery.
Good luck!
Upvotes: 2
|
2015/05/04
| 2,357
| 10,209
|
<issue_start>username_0: I want to start a chapter in my dissertation by motivating a mathematical operator by showing why it is interesting to look at it and what I can contribute to understand it better. However, I actually need to introduce some mathematical objects in order to correctly state everything.
I think it is a rather bad idea to start with a section introducing the mathematical concepts (like measure theory) and only then start the actual motivation. But if I do it the other way around, then I am at a loss for words.
For example, in my motivation I would need to use an additive-finite measure space, an operator, the space of mu-integrable functions, and a stochastic process.
How would you suggest coping with such a situation?<issue_comment>username_1: If you go deeply enough into measure theory and stochastic processes to actually write your dissertation about them, it is safe to assume that readers will be familiar with the common concepts. So just assume that people understand what you write about. Do some handwaving if necessary ("we examine an interesting class of operators that are distinguished in that...").
Worry less about correctness than about telling a good story. After all, this is a motivational section. Don't include any definitions, or no more than one if it is *utterly* necessary. (And then, if you find that a definition *is* necessary in an introduction section, I'd argue that you probably need to revisit what you want to write in that section, until the definition is *not* necessary any more.)
Worry about correctness in the main body of your chapter.
Upvotes: 3 <issue_comment>username_2: When I was a Ph.D. student working on my own dissertation, I went to the university writing center for help and had a revelatory experience. The person working with me sat down with the first page of my introduction and effectively dissected it to identify the problems without understanding any of my technical jargon. They did this by reading aloud as we discussed, substituting blank/nonsense words for every piece of jargon, e.g.:
>
> Here we apply method X to determine whether adjective thingies can be made to wibble.
>
>
>
This type of substitution forces you to step back from the technical world that you have dedicated so much time and love to, and understand your narrative---or lack thereof.
In your motivation, you need to take a couple of steps back and ask: why does anybody care about *additive-finite measure space* ("frobs") and how it relates to the *space of mu-integrable functions* ("greebit-space") or a *stochastic process* ("wibbling").
You didn't pick these elements at random. There must be some reason why you picked them and how they relate to the bigger community. Are they intended to solve a puzzle that a lot of people care about? Or a small piece of such a puzzle? Do they unite two sets of concepts that people thought were different? Will they help understand string theory or give better tools for interpreting MRI imaging?
You want to be able to write something like this:
>
> People have wondered about how to better understand frobs ever since <NAME> first used them to pick the locks in Los Alamos. Although X, Y, and Z attempts have been made, none of them got very far because they were all green-colored. In this dissertation, I examine an alternate path, reducing the problem of frobs to the simpler system of greebit-space by means of an innovative application of wibbling. These results bring us one step closer to solving the problem of frobs, and how they can be better used to quickly and cheaply pick locks.
>
>
>
Now, what I've written is pure gibberish, and your motivation will almost certainly be much longer. The point, however, is this: your goal in a motivation section is to *motivate* by explaining that there is a problem that people care about and that you have an approach that gives at least a piece of the solution. Explain it in a way that your jargon can just be placeholders in the reader's mind, and it will be fine to leave the complex definitions for later.
Upvotes: 6 [selected_answer]<issue_comment>username_3: In addition to other good points made in the other answers, I think too often people overlook the question of the actual, likely audience/readership for a piece of technical writing. For example, it is unlikely that anyone without at least a rudimentary knowledge of your general subject would look at your thesis at all, so you can safely use the standard, basic terminology to give an introduction and overview of a given chapter. That is, it is not useful to imagine that you are explaining "from scratch" to someone who's completely unacquainted with the topic under discussion, since the reality would be that they'd not instantly assimilate "definitions" in any case.
In other words, contrary to what we sometimes may imagine, there is a *context* in which we write, and that context is most often richer than we acknowledge. Thus, the work is not to re-establish the basic context, but to make *larger* points. That is, as in the other answers, I don't want to hear delicate (and possibly pointless) semantic distinctions about word-use, but, rather, about *why* you are doing what you're doing, etc.
Upvotes: 3 <issue_comment>username_4: When I encounter this problem, I write the introduction as if the readers knew the concepts that I mention, but I include a parenthetical comment or a footnote, after such a concept, along the lines of "This and other concepts used in the introduction will be defined in Section 2."
Upvotes: 4 <issue_comment>username_5: Mathematicians have a tendency to train to hide away they tracks they used to take to get to their goal (apologies to <NAME>). This means that motivation is the thing they have been trained *not* to give. As compensation, they give examples, ranging from trivial to realistic to absurd following the definitions.
This is the situation on the ground. The reason is that mathematical objects are often obtained by so many steps of abstraction from originally natural-world concepts that their real-world origin is obscured or very difficult to intuit (think of the very compact definition of a topology).
Therefore, it is useful to the reader to "recreate" the bridge to reality (which is often possible) and explain which of reality's features are required and which ones are discarded. Measure theory is not so bad in that respect. Basically, you are talking about a kind of "volume". In "nice" spaces, such as vector spaces, you could consider n-forms as volumes (almost literally), but if the space gets nastier, without a concept of tangent spaces and the associated structure, you have to look at what permits you to extend this concept to suitably selected subsets of your space. My favourite response to the question of what you miss if you have no measure is the Banach-Tarski paradox.
Now the game can also be played on a higher level if you talk to mathematicians who already know a lot. You now need to explain how *your* concepts will fit into what *they* already know. So, a group theorist may be motivated to look at semigroups by explaining which axioms you drop (and why), or which phenomenon motivated your definition of a semigroup (for instance, attempting to model non-invertible operations).
**In short:** the point is to explain and to motivate which concepts and phenomena in "the universe of the reader" correspond to the properties discarded or generalised (abstractions) or to the newly studied phenomena in your universe.
Upvotes: 2 <issue_comment>username_6: It's a delicate balance. You say:
>
> ...in order to correctly state everything.
>
>
>
But why are you correctly stating everything if it's just a motivational discussion? So you see you have a balancing act whereby you need to give up a little bit of space on the side of correctly stating everything in order to gain some space on the side of being able to flexibly discuss the concepts, ideas, history etc.
This is actually really hard and usually takes much more experience than it did to solve the research problem in the first place. So I think it's common for e.g. a graduating PhD student to have the technical knowledge to solve the problem but to find it difficult to articulate where the problem lies within a much bigger field of inquiry.
As you gain more experience you will know *when and how to lie*. And you will also know much better what counts as standard. When you've just spent years learning the basics of a research field you often feel like things need definitions that really don't. Other experienced mathematicians are probably more comfortable than you think with not fully understanding every detail or remembering every definition, but instead vaguely knowing what such-and-such an object X is and roughly what it does, and more or less getting the idea until the later point at which you define everything.
To try to give one piece of practical advice: Look for ways to not tell too big a lie. Find places you can say that 'an object X is essentially an object Y together with a parameterization of its involutions' (or whatever) where object Y is something you are sure is more standard.
One example that comes to mind from my education is distributions. I heard both of the following vague characterizations:
* Distributions are generalized functions. ("OK right, so I should think of them like functions")
* Distributions are like the abstract *dual* to functions. You pair a distribution with a function to get a number.
This confused me when I was younger. But after some experience I guess you know the ways in which these are both true and you get that *different contexts call for different lies*.
The readers who don't know the stuff well will essentially have no choice but to just swallow the lies. Then you get worried about the readers who *do* know the stuff well. Because then when you tell a lie, they might get offended, like "gah this writer has oversimplified and left out the crucial essence of object X; how will anyone get the important content from watered down motivational discussion!?" So like I said, it's a balancing act.
Upvotes: 1
|
2015/05/04
| 2,377
| 9,821
|
<issue_start>username_0: What strategies do you use to remember papers that you read?
I usually take notes, but when I look at the notes after a while, I don't remember most of the details. So I have to go back to the original paper and read many sections once again.
Are there better strategies for remembering, for a longer time, key points from the papers you read? I am mostly talking about CS and math papers, in which details can be very easily forgotten.<issue_comment>username_1: As a CS major and programmer who has to look at Java's API for most anything past `System.out.println()`, I have learned that memorization can almost hurt more than it helps in CS. That said, this pertains to programming itself rather than to research and papers; my strategy for remembering information I read is to do an exercise on that information soon after (if not immediately after, or even while) reading it. For example, if I'm reading about amortized analysis, I always have the associated homework or practice problems next to me, working on them as I read. I find that memorization is almost a self-defeating goal, as memorization for its own sake almost always fades away quickly for me, whereas if I apply knowledge or information soon after taking it in, it helps me digest it more completely and it begins to become second nature.
Upvotes: 2 <issue_comment>username_2: I get much more from a paper if I do a few things:
First, I try to connect it with the research program it comes from. One of my advisors is really incredible at giving a brief bio of almost any researcher in the field including their PhD advisor, the topics that interest them, where they might have done a postdoc, who their collaborators are, etc. All this stuff can seem kind of tangential until you start to see the connections between researchers and programs. At that point, it becomes an extra associative dimension to help you recall the research itself.
Second, I try to work in a discussion of every important paper I read with an advisor. When I'm approaching a deadline and reading a lot this doesn't work, but in more relaxed periods I try to set a plan in advance (e.g. 'Next week let's talk about these three hippocampal modeling papers') so that I know as I'm reading I am going to need to remember both big ideas and technical details. Talking through the paper lets you relate it to other ideas in the field, compare and contrast, expand upon the ideas, and critique things. All of these are good on their own, but also useful because they offer more associative threads to tie the paper into a web of knowledge.
I'm sure there are other strategies. For really important ideas, username_1's answer probably is best, since you need to work through them on your own to fully understand the subtleties.
Upvotes: 3 <issue_comment>username_3: I try to give a brief summary to every article I read. I highlight and make notes on the paper and transfer this from analogue to digital with Mendeley.
In Mendeley there is a 'notes' section, where I post notes about the article. This includes:
* Hypothesis
* Interesting methods
* Important conclusions
* Thoughts for the future
Maybe this is something you can use as well? This is in medicine/biology by the way, more concept-based than math-based.
EDIT: Here is my full list (which I never manage to fill in completely, but it helps):
>
> Complete citation:
> Key Words:
> General subject:
> Specific subject:
>
>
> Hypothesis:
>
>
> Methodology:
>
>
> Result(s):
>
>
> Summary of key points:
>
>
> Context (how this article relates to other work in the field; how it
> ties in with key issues and findings by others, including yourself):
>
>
> Significance (to the field; in relation to your own work):
>
>
> Important Figures and/or Tables (brief description; page number):
>
>
> Cited References to follow up on (cite those obviously related to your
> topic AND any papers frequently cited by others because those works
> may well prove to be essential as you develop your own work):
>
>
> Other Comments:
>
>
>
Upvotes: 4 <issue_comment>username_4: The advice I was given by my supervisor is to write about a paragraph on each of the four following points for each paper you read (note, this is from a very CS perspective):
* **What's the context for the paper**, in other words, what is the issue that the paper's trying to tackle, and what's the prior work in the area, or the work that the paper's trying to build off. For work published as part of bigger projects, what's the overall aim of the project, and how does the paper fit in.
* **What's the key contribution of the paper**, i.e. what is the central idea or improvement that the paper presents over the state of the art. I (personally) find this one of the more important sections (especially in terms of remembering papers later), as this is where you need to really home in on what the core idea behind the paper is, without all the extraneous implementation detail/proofs/arguments.
* **Criticisms of the work**, no research is perfect! I'm personally terrible at looking back at papers I've read with rose tinted glasses, leading to a little bit of imposter syndrome. Having a list of criticisms of the paper can help to dull that, as well as providing a good counterbalance to the "key contribution" section.
* **Any other thoughts**. As the purpose of the four points is to help you remember the paper later, it's also important to make a note of what you personally thought while reading the paper. Really liked a sidenote they made? Make a note of it. Found a corner case you don't think they cover properly? Make a note of it - you could turn it into research later!
I usually find that I take about 10 minutes writing notes on each paper, along with about the amount of text that I've written for this answer. Your mileage as to how much you want to say about each paper may vary however.
Upvotes: 5 <issue_comment>username_5: Take notes. Information passed though brain to hand, especially if you take the time to think about it and rephrase, ask additional questions, etc., is a bit more likely to be remembered. Reading it aloud, "in character", also sometimes worked for me but requires privacy.
Upvotes: 1 <issue_comment>username_6: Expect to go back to the original for matters of substance, otherwise a 1-bit error in your recollection could make you suffer (e.g. changing the sign in an equation).
Concentrate only on remembering the major author(s) and affiliation (easy way to mention it to someone else), how you would find it again, and the key new stuff (for you) which you should find in the abstract. If there's an important equation/algorithm/method etc. then you will want to write it down, but that's not so much *remembering a paper* as learning something new *from the starting point of a paper*.
**Edit:** I should note that I vastly prefer reading papers on paper and can therefore make margin notes and file particularly interesting ones.
Upvotes: 2 <issue_comment>username_7: Here's some good rules of thumb I saw (or did I hear it? :) ) and remembered (amazingly enough). Can't remember where though...:
```
What you hear, you forget,
what you see, you remember,
what you do, you understand.
```
"Jokes" aside, what worked best for me at least while studying and trying to remember things was to try and explain/retell to someone who really is not in your field.
For instance, a sibling or parent etc. The hardest part is to get them to agree to it, but hey no one said education was cheap (sacrifices and returned favors are to be expected in some cases). Perhaps they may put you to the same use for their similar needs :)
It is easier if you can incorporate your theory into what would be a really exciting example for the person in mind. Trying to Re-tell something advanced in an understandable and exciting way to someone who does not know very much about your field does *Wonders* (at least it did for me).
Because when you have read a paper -> written notes -> finished the exercises -> feeling you "basically understand" what you've studied, the final nail in the memory-coffin is to teach what you think you've learned.
Because once you have managed to successfully explain it to someone else (which may take time, but is a lot of fun and time very well spent!), you will probably have twisted the material around, tried to answer questions, found alternative ways to explain and simplify, and come up with real-world examples to give the other person some kind of feeling for it (if not full understanding, which may not be expected of course); in doing so, you quickly realize which parts you had to check up on in the book/papers again.
This is no easy task but it is a lot of fun (I think so at least; I like to teach and learn), and my head usually got really tired afterwards (you know, "tired" like when you've been up late studying and suddenly the brain gets "high" and YouTube clips suddenly become very funny and all). This is because it puts your brain to hard work for quite some time, but the key is that one likes it, so it is fine and the brain will remember more than one may imagine.
I shall not share my whole life story, but I can tell you that this was without doubt the most effective (and probably efficient) way for me to not only remember, but more importantly to also understand and get new perspectives on the subject.
Upvotes: 2 <issue_comment>username_8: The main reason why people forget what they read is the lack of rehearsal.
I use a notebook to keep track of the important points from what I've read.
Some apps are also great for that, e.g. Evernote, Readult, or any note taker.
While on the road (for example, on my way to the office), I now always take time to scan through my notes to keep them fresh in my head.
Upvotes: 1
|
2015/05/04
| 797
| 3,631
|
<issue_start>username_0: Some people have reasons to want their identity protected from a Google search, such as a dangerous stalker ex-spouse.
When you become a grad student, assistant professor, etc. at a university, and if the university wants to put your name on their website (directory, course catalog, departmental page, to name a few), can you refuse it or have them at least put out a pseudonym instead, assuming you have been publishing or contributing to articles (or intend to) with that pseudonym?<issue_comment>username_1: I think you have to accept the reality that this is not practically possible. Your name will appear on dozens of websites in various ways -- your own website, your department's website, the university's website for news and announcements, the university's class catalog for the current and previous years, the university's records of external grants funded, the university's accreditation reports that include publication lists, your university's open records disclosures of state employee salaries to open websites, etc etc etc. This will, in all practicality, involve dozens of people who take your name from electronic databases, paper records, payroll systems. Many websites will be automatically generated as well from electronic databases. Even if state law gives you the right to opt out of something like this, you will almost certainly find it impossible to actually do so in practice given that you have no idea where all your name is listed and who to contact to get it removed or changed. You also have to balance this with the university's very reasonable interest in allowing students to find information (such as teaching schedule, website, course information, etc) about their teacher.
I don't really think you have any option other than to legally change your name permanently.
Upvotes: 1 <issue_comment>username_2: This is commonly done in several types of cases:
* by women who want to keep using their maiden name professionally while switching their legal name to their husband's
* by students from countries that have taken a hostile stance towards the United States, who want to protect their extended families back in their home country
* By foreigners with impossibly long (from an American standpoint) first or last names or who use non-mainstream variations on their name
* By faculty who go by their middle names or nicknames not their formal first names
* By aforementioned stalking victims
If you are at a large enough institution, they are prepared to handle these types of situations. My (large enough) institution has the following:
1. Fields in the university directory database that allow for 'nicknames' to take the place of names. Because all of our e-mails are <EMAIL>, this allows for e-mails to arrive properly.
2. Flags in the university directory that particular information should not be publicly released. One can prevent the listing altogether if you choose.
3. The only other place where names would appear would be the faculty/staff/student website, which is under departmental control and easily changed.
One of the doctoral students that I work for is from one of the aforementioned hostile countries and you will not find any web presence for her on the departmental website.
This gets trickier when you are faculty, as students need to enroll in your courses and you need to publish. I would recommend in this case either a full legal name change or pushing your university/department to recognize a pseudonym. It helps greatly if you have a common last name and only need to modify your first name.
Upvotes: 4
|
2015/05/04
| 3,423
| 14,556
|
<issue_start>username_0: A professor has a habit of starting oral examinations by asking for or looking up grade averages and grades from other subjects and then taking them heavily into account when evaluating the student in question. For example, I was allowed to be even considered to get an A in Real Analysis III, because I got an A in Real Analysis II and had a good overall grade average.
Is it ethical or helpful to ever consider a student's performance outside the course? If not, what about tightly interconnected subjects? But then, most mathematics is interconnected in one way or another...<issue_comment>username_1: What you describe is, in my opinion, horribly unethical!
Yes, past performance is often a predictor of present performance, but there are so many other factors involved as well. What if a student did poorly before for any number of reasons, but has since stepped up their work, caught up, and really mastered the material? Or what if the student has been focusing heavily on this subject and has consequently fared more poorly in another subject?
It is manifestly stupid and counter to the entire notion of education to ignore good work by a student in one location simply because the student did bad work in another course or another time. Ethical violations are a different matter---it is reasonable to be suspicious of a student with a history of cheating---but grading a student poorly for having the audacity to exceed expectations is a rank betrayal of the most basic responsibilities of an educator.
Upvotes: 8 [selected_answer]<issue_comment>username_2: No, I do not think that basing grades on *past* performance is fair. Grades should be based on the performance being graded, and it should be clear up front what exactly is being graded.
(Of course, if the syllabus explains that a grade will be based on homework and a final exam, then of course the grade would fairly be based on the final and the past performance in homework.)
---
One possibility for using past performance would be an oral exam. If I know the candidate sitting in front of me is very good or very weak, I might ask more targeted questions - focusing on basic definitions for a weak candidate, or on advanced understanding for a strong one, to avoid wasting everyone's time with questions that are too easy or too hard. If (!) the oral exam can then follow the natural development and the examiner adapts the difficulty of subsequent questions to the candidate's performance on earlier questions, then I'd argue that this kind of "customization" is acceptable.
Upvotes: 5 <issue_comment>username_3: In general, what you're asking is an unethical practice. I could see some exceptions to this where such considerations would be reasonable:
* You are taking a multi-semester course sequence, and the instructor is willing to base final grades on progression and improvement, rather than a strict numerical average. For instance, if you've improved from a B- to an A over the course of the year, your final grade in the second semester would be an A instead of say a B+ average.
* You are being judged for something like a qualifying exam, in which performance in coursework over one's career can help to ameliorate poor performance on the exam itself.
Upvotes: 4 <issue_comment>username_4: This scenario is ambiguous.there could well be and ethical abuse that the instructor is not doing his/her job, but the use of past performance as one input in the grade decision is not inherently an ethical abuse, depending on how one understands grades.
There is not widespread agreement about what grades measure.
Depending on whom you ask, grades might measure:
* how much a student learned in a course
* how much a student knows at the end of the course
* how much effort a student put into a course
* how well students complied with course procedures
* what a student is capable of doing at the end of the course
And these comparisons could be relative to a fixed standard, or relative to other students in the class ("on a curve").
All of the above have been used in courses I have seen. I'm afraid it seems that grades in a course mean whatever the institution or instructor define them to mean. There are some practices that would be widely condemned as abusive, but grading based on total proficiency and using past achievement as part of that assessment seems to be within the realm of defensible practices.
If the standard is an absolute level of proficiency in a body of material, past work in the field is a helpful piece of information about a student's likely level of proficiency. In other words, if a grade in course N is intended to reflect how well a student can do tasks x and y, and tasks x and y require skills from course N-1, then it seems reasonable to use this information.
This is not how I would advocate grading, by the way, but it seems to fall within the universe of acceptable practices. If a professor believed that the only way s/he were comfortable asserting that a student attained the level of proficiency s/he believed warranted an "A" grade were if the student previously demonstrated mastery at the "A" level of prerequisite course material, and also had high overall mastery of other material as demonstrated by a high GPA, this decision strikes me as unfortunate but not unethical.
I would hope an instructor would allow other ways to demonstrate that mastery besides grades in a previous class; it is unclear from this example whether that was the case. When the instructor told the OP that an A was only in consideration because of past performance, I do not know if this was a general requirement, or specific to this case based on other assessments in this class.
If the instructor views grades as a certification to the student's future instructors, employers, clients, etc. of the instructor's assessment of the student's proficiency with course material (far from the only meaning given to grades, but one widely accepted one), then this can be justified as ONE input in the grade decision. Given the set of observations about the student (exams, projects, homework, etc.), how likely is it that the student has at least (excellent/good/fair/poor) mastery of course material? If the goal is to give the highest grade the instructor can justify with confidence level x, past performance is an input that adds information to the decision function.
If the instructor based grades ONLY on what the OP described, that would be shirking the responsibility of the instructor. If the instructor said that the student's work in this course alone was not enough to definitively convince the instructor that the student was at an "A" level of mastery of course material, but that this work combined with evidence of the student's past work raised the confidence level enough to justify certifying to those who encounter the student in the future that the student mastered the course subject, that could be ethical.
Upvotes: -1 <issue_comment>username_5: I **disagree** with the notion that limiting marks based on past performance is necessarily unethical. If done without a valid reason (for example as a case of laziness on the professor's side), then it would very well be unethical, but there exist cases in which such a system is perfectly reasonable.
One reason that would justify the professor's system of grading would be one similar to the [GRE's adaptive testing system](http://magoosh.com/gre/2012/is-the-revised-gre-adaptive/).
If the professor knows that the student did poorly on his/her past course, the professor can then choose to administer an easier examination, which better distinguishes students of medium to low ability from each other. Similarly, if the student managed to obtain an A for the past course, the professor can then administer a more difficult exam to better distinguish students of relatively high ability from one another. Students who are given the "easier" exam would then not be able to achieve the maximum scores in the final grade.
This would be analogous to the GRE's system which uses the results of the first section to produce adaptively a second section which is scaled to an appropriate level of difficulty.
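To make the idea concrete, here is a minimal sketch of such an adaptive scheme; the cutoff, the grade cap, and the function names are all hypothetical illustrations of mine, not anything the GRE or the professor actually uses:

```python
def choose_exam(prior_grade: float) -> str:
    """Pick an exam version from prior performance (hypothetical cutoff)."""
    return "hard" if prior_grade >= 85 else "easy"

def final_grade(exam_version: str, raw_score: float) -> float:
    """Scale the raw score; the easier exam caps the achievable grade."""
    if exam_version == "hard":
        return raw_score            # full 0-100 range available
    return min(raw_score, 75.0)     # hypothetical cap for the easier exam

# Example: a student with a weak prior record gets the easier, capped exam
print(choose_exam(70))              # -> "easy"
print(final_grade("easy", 95))      # -> 75.0 (capped)
```

The design choice being illustrated is simply that the first signal (past performance) selects the instrument, and the instrument then limits the range of outcomes, exactly as the easier GRE section limits the attainable scaled score.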
Upvotes: 2 <issue_comment>username_6: I think this is unethical. All instructors are vulnerable to bias and subjectivity in grading and ought to do everything they can to minimize that; looking up a student's previous performance introduces a strong source of bias that was completely avoidable. An instructor who looks up a student's previous grade on an exam is going to go into the exam *expecting* the student to do well or poorly based on that previous grade, and people tend to be biased towards seeing what they expect to see.
I do understand why it's *tempting* for instructors to do this, though. I am a part-time instructor myself and have been a TA for many professors, so I see professors do things like this, and have been tempted to do it myself. Psychologically, I think the reason is that instructors are aware that grading is somewhat subjective, but that is also something that makes them uncomfortable. And they tend to feel uncomfortable if their grades look random: if people who did really well last semester do really poorly with this time, or vice versa, or if a student got an A- on one paper and a B- on the next and then an A on the next. Checking what grade a student got last semester, or on the last paper, *feels* like a way of double-checking your own grading, of getting a second opinion to double-check your sense of the student's performance on this particular task.
But the problem is that, far from being an effective way to deal with bias, the above procedure just *magnifies* its effects. The way it works in my experience is that if the student got a high grade last time, it's tempting to think, "Well, they're smart/a good student" and then give them the higher grade; if they got a low grade last time it's tempting to think "Well, even the lower grade is still better than they got the last time, so that's fine," or "This student is just lower-performing," and then give them the lower grade. And that's just not fair. But it feels secure and reassuring to an uncertain grader to know what other graders have given a person before assigning them their own grade, so unfortunately it happens.
Upvotes: 3 <issue_comment>username_7: This reeks of typical halfassed professory.
Instead of working to create an assessment (ie grading) method wherein he can comprehensively determine how well a student understands the topic, this professor has willingly created and accepted a sub-par assessment which requires reviewing previous performance.
"Well, the student failed my test... however, on related courses they passed. Therefore I must have a flaw in my assessment or my course, so I should let them pass because they *should have* understood it and my methodology can't tell me whether they *did* understand it."
That's what I'm reading as the inner dialogue.
However, at the same time, I love it. This is a very realistic method for complex topics which are not readily conveyed and/or the understanding of which are not readily assessed.
And then I come back to hating it again, because *that's the professor's job*. Professors generally don't like the course work and teaching and testing... too bad. I never liked folding shirts when I worked at a retail store, and I never liked NBC warfare training during my military service, but they were part of my job. I didn't get a pass just because I "don't like" them.
**So yes, it's extraordinarily unethical!!**
Not only because it's not giving you the grade you *earned*, good or bad, but because the professor is doing this in lieu of a portion of the job they don't like/find hard. The university (and thus, you/your creditors) is paying them for a job they're not actually doing! That's unethical; that's fraud.
But it's quite clever and realistic, so maybe... :(
Upvotes: 2 <issue_comment>username_8: You should check your professor's grading criteria against his grading rubric - if 'prior courses' is not listed on that rubric, it's not only unethical, but duplicitous and could be brought up to the Dean if your grade or others' grades reflect poorly because of it.
If it's not on the rubric, you could still bring it up with the dean as being duplicitous, but there would be less actual evidence to bring against the professor, and since it's an oral exam, you'd be hard-pressed to provide proof without some type of recording of the exam itself.
If it's on the rubric, you could bring it up with the Dean and ask if it's really allowed for professors to do so, but if they say 'yes' then you're completely out of luck.
However, it is **completely unethical** regardless of what the dean says - you may have some paths of recourse for it, but regardless of whether or not those pan out, it is still unethical, and you have every right to feel it is unfair.
Upvotes: 2 <issue_comment>username_9: Although I agree it's unethical, similar patterns are sanctionned by the academic world.
To play devil's advocate, maybe your teacher is adequately preparing you for the way things work in higher education?
One of the most important criteria for getting funding, a bursary, a grant, a publication or even a TA job is whether you have been granted that same sort of thing before.
It's a vicious circle where, if you've never had one of these, chances are you'll never get one and therefore probably won't have the extra "humfes" (jobs, contacts, money, opportunities) necessary to be at the top of your field.
When you think about it, that's what a CV is for. While looking at your profile, a prospective employer looks first at what you've done: if you've done the same job you're applying for, your odds improve.
Upvotes: 0 <issue_comment>username_10: It's very unethical in my opinion. I quickly explain my way of handling things when I examine students:
Questions and answers for oral and written exams are set before the exam. For each question I expect certain key content to be mentioned. If all of it is mentioned, it's full score for that question; if not, the score depends with mathematical precision on the amount of content mentioned compared to the amount expected.
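As a rough illustration of that proportional rule (my own sketch with hypothetical key points and weights, not the actual rubric described above), the per-question scoring could look like this:

```python
def question_score(expected, mentioned, max_points):
    """Full marks if every expected key point is mentioned;
    otherwise scale linearly with the fraction of expected points covered."""
    covered = len(set(expected) & set(mentioned))
    return max_points * covered / len(set(expected))

# Example: 2 of 3 expected key points mentioned on a 6-point question
print(question_score({"definition", "theorem", "example"},
                     {"definition", "theorem"}, 6.0))   # -> 4.0
```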
After an exam I publish the questions and the key content. Every student can check what he did wrong and how his grade was determined. I find this the only fair way: if you cover everything that is expected, you get full score, and you have the right to know what you did wrong.
Upvotes: 2
|
2015/05/04
| 285
| 1,263
|
<issue_start>username_0: Is it appropriate to include interviews (published online or in print magazines) in an academic CV?
What about articles/news published about your works? I mean brief (one paragraph) news that professional magazines publish about significant research articles.
If yes, how do you do this to avoid exaggeration?<issue_comment>username_1: It depends, but yes, I would include media exposure in a long form CV. This could be advantageous for grant applications, because it might be an evidence that you can generate a higher impact.
For each media exposure, just put one line in citation format. If you have more than 10 such items, select only the higher-impact ones, and merge ones with similar content.
Alternatively, you can put it as a footnote to the relevant publication.
Upvotes: 3 <issue_comment>username_2: Officially no need to include newspaper or magazine article containing an interview about the results of your achievement in science and technology in an academic CV, but it can be a plus factor when you're applying for a grant or scholarship where other applicants quite competitive. It will gives the reviewer impression of your commitment to areas of interest and the society impact of your researches.
Upvotes: 1
|
2015/05/04
| 445
| 1,936
|
<issue_start>username_0: One of the key factor in the resume of an academic is the amount of research funds he was able to secure.
Most research funds go towards human resources, for example, postdoc fellows and PhD students. In my resume, I write the total funds I have received. A significant part of these funds is paid to my research fellows.
However, sometimes my institution hires staff for me. For instance, if my university gives me credit to hire postdoc fellows for my research, this is also a kind of grant that I have earned, but it is not a value to be summed (to estimate total funds I've secured).
**Questions:**
* The list of funds we secured are normally external funds provided by funding agencies. How do you calculate internal funds granted to you?
* If your research institution grants you special access (as a reward to your achievements) to human resources or facilities, but no money is transferred, how do you estimate the total fund granted to you? And is it appropriate to do so?
|
2015/05/05
| 1,419
| 5,908
|
<issue_start>username_0: I'm in the following situation. My first 7 months out of college, I worked software job I really didn't like, and then I hopped on to another job I like. I've been in that second job for about a year now, and even though my career is doing pretty OK, I'm thinking about going back to school for a MS and maybe a PhD. I have some concerns about going back to school that I was hoping I could get some insight on.
I didn't get funding for graduate school because I did no research as an undergrad and thus had weak references, but tuition for one of the better programs I got into is rather low, as it's a state school and I'm in state (UMass Amherst). I was making a bit over six figures at both of my jobs, and I've saved up more than enough money to cover both my undergrad loans and the entire cost of the master's program plus living expenses.
I want to give grad school a try because I've gotten really into computer science these past two years, and I'd like to study it full time for two years while doing research. In college, I was too busy dealing with a deep depression and health problems to excel as a student to the degree I know I'm capable of, and I want to have the academic computer science experience I dreamed of when I was in high school but was robbed of due to circumstances.
I am, however, concerned about my employability post-MS. I will have had two short stints on my resume - a 7 month job and a 19 month job - making me look like a disloyal job hopper, and I'm afraid that potential employers might be turned off by my over-education should I choose to go back to programming. I'm also concerned that my department might not treat me with respect as a self-funded student. If they thought I was good, they would have probably offered me funding.
How are MS degrees viewed in the industry? Are my concerns valid at all?<issue_comment>username_1: I can respond to one of your questions:
>
> I'm also concerned that my department might not treat me with respect as a self-funded student. If they thought I was good, they would have probably offered me funding.
>
>
>
Nothing to worry about here. Your professors won't care whether you have an assistantship, and they probably won't even know whether you do or not.
If you decide to postpone starting your degree by one or two semesters, you could register for a class either locally or online. The key is to take it as a "non-matriculated" student, meaning that you won't be taking it as part of a degree program. (You can transfer it later and ask for it to be evaluated for possible partial satisfaction of your degree requirements at the institution where you end up.)
The stand-alone course could result in a nice grade and a nice recommendation letter, if you're lucky. However, the main reason to take it is to start to satisfy your intellectual itch.
I don't know whether you'd have a problem getting employment after the degree. If that does pose a problem, I suppose you could simply leave the MS off your CV.
Upvotes: 2 <issue_comment>username_2: I would take any advice about your employability post-MS you get at Academia.SE with a large, large grain of salt. Most people here are in academia and are not really qualified to judge your employability in industry.
In fact, your question may well be closed as off-topic here. If so, you may want to flag it for migration to [Workplace](https://workplace.stackexchange.com/questions).
That said, I see your point about having rather short stints on your CV after the MS. I agree that this may leave a bad impression. I'd suggest thinking about postponing your MS until you have two or three years' tenure at your current employer. Anything beyond two years should make it clear you are not a job hopper.
Do you have any colleagues, friends or acquaintances that went back to school after a few years in industry? What do they say? Did they typically have to explain their decision to potential employers after the MS? How did they do? (If you know anyone who was employed, went back for an MS and is *unemployed* now, buy them a beer and have a *long* talk to learn what you need to avoid.)
During your MS, keep your network current. Your current employer may take you on board again, or at least provide references. Make sure they still know you two years later. Think about doing an internship or something similar.
Or think about staying in academia, if you really find that the academic and researchy side of CS appeals to you.
Upvotes: 2 <issue_comment>username_3: One recurring issue I've seen in folks considering a CS M.S. or Ph.D. is a belief that the advanced degree will mark them out to employers as some sort of "super-programmer". This is generally not the case, and I think it's where folks get the notion that an advanced degree can hurt your job prospects. In my experience, very few companies are interested in hiring folks with advanced degrees for jobs that can be handled far more cheaply by someone with no degree but a lot of experience.
Where an advanced degree becomes helpful is in domain knowledge. Somebody with a C.S. Ph.D. may or may not be a decent web site developer, but there is supposed to be some (tiny) area of CS where they know as much as anybody on the planet. If they can find a company that needs that particular bit of knowledge (or something closely related), they have a good shot at a well paying job.
Once you have an advanced degree you shouldn't be looking for "programming" jobs; you should be looking for jobs as an expert in a topic like machine learning, compiler code generation, agile methodology, high-availability databases, and so on. There aren't as many of these jobs as there are "programmer" jobs, but there is also a lot less competition for them. Lots of excellent programmers are self-taught, but not so many experts in convex optimization.
Upvotes: 3
|
2015/05/05
| 1,483
| 6,503
|
<issue_start>username_0: I am a CS undergrad student who is just getting started with research and reading papers. My mentor has asked me to read a few papers. While reading I noticed that every few lines there are references to previous papers being cited and the entire paper just builds on the previously described architectures and then adds something new to it.
In such cases, if I start reading the cited ones (and there are many mentioned over and over again), I am afraid I might get stuck in a loop of such references and never get to the main idea. What should my approach be?
My mentor wants me to thoroughly understand the ideas stated here because we are going to be implementing those soon albeit with modifications. I know of the 3-pass approach but that does not solve my particular problem.
Note that my particular doubt is not about *how long* reading should take. I can see many resources explaining that very well already, so I don't think this is a duplicate of them.<issue_comment>username_1: What you are describing is pretty normal in academia. People take on old and existing work, and build upon it.
There are [typically several reasons](https://academia.stackexchange.com/questions/44374/what-are-we-expected-to-do-when-citing-literature/44398#44398) for citing prior studies, thus the impact of each individual citation might (and usually does) vary as well. I reckon this is all quite new to you. In your case if you can understand the work presented in the paper without diving into the citations, then just do that and ignore the references for the time being. You can always trace back to specific (read: not all) references later on to fill in any potential gaps in your understanding.
If you cannot grasp the ***why*** or the ***how***, then my suggestion is to pay more attention to the introduction section, checking out the important (often **recurring**) references, and then re-reading the introduction. You should get a fairly good idea of what the paper is really about this way.
Another approach could be to try to discuss the paper with your supervisor or someone else who might be able to judge it better (any grad students or TAs?)
Upvotes: 5 <issue_comment>username_2: There isn't anything surprising or unprecedented about references to earlier works, or about authors building on earlier models/structures. Additionally, not every article (in any field) is going to be pedagogic. While that could be due to the author's writing style, it is not always possible to keep everything at a very basic level either. This is true in particular for formats like *letter publications* or *rapid communications*, where there are limits on manuscript length. Also, when there are existing pedagogic review articles, or otherwise detailed accounts which could pass as pedagogic, authors generally tend to avoid a detailed presentation, simply cite them, and get to their main point.
Since you say that
>
> My mentor wants me to thoroughly understand the ideas stated here because we are going to be implementing those soon albeit with modifications.
>
>
>
it is clear that he/she wants you to understand the basics thoroughly, and not just be limited to the arguments presented in the one article you have at hand. So, if this article appears cryptic, you have no choice but to go through the cited ones for clarification. Try to see if they are less cryptic; it is possible that the authors have cited a review article of the sort I mentioned above. That will certainly help. Based on that understanding, when you revisit your particular article, you will follow it much better and see more clearly what it adds to the subject.
However, if that doesn't work out, and you still find the arguments too cryptic, consult your advisor and explain why you find it cryptic, which arguments are not transparent etc. It helps to carry along with you the attempts you made towards understanding it yourself, so that he/she knows you are not asking to be spoon-fed. Once he's convinced that you made an effort from your side, he will generally point you towards other articles which could clear up the specific point for you.
I mean, there is no other choice. There is no way you can gain anything from this collaboration, or implement anything, unless you are clear about what you are doing.
Upvotes: 2 <issue_comment>username_3: Research papers build on previous work, so it can be hard to get started. In addition to the excellent suggestions already given, you can also look for a "review article", or ask your mentor if he knows of one. In many disciplines it is common that every once in a while a (senior) researcher publishes an article that summarizes and reflects on the current state of the art concerning a particular question. Such articles are very good for getting started in a discipline, as long as you keep in mind that they are not necessarily a neutral representation of the field.
Upvotes: 3 <issue_comment>username_4: The abstracts of those linked articles should be sufficient to tell you whether they are being cited as a major contribution to the work you are interested in, and abstracts should be written to be quick to read. With experience you will find that some referenced papers only need skim-reading, others can be left to last based on the title, and a few will need detailed follow-up (though you have to learn when to stop). Groups of references will often not need full attention paid to every one of them.
Here's a little example, from a hypothetical paper about producing a better model building on previous models:
I could guess that in "methods based on the *xyz* technique[17-23] have been found to consistently underestimate reality in this region", references 17-23 probably won't need reading (in detail); in "the *abc* method[42-45] is a good estimator at low input values, while the *pqr* method[46-58] is more applicable for high values" I'd expect to read some or most of references 42-58. If the author felt helpful or their mind worked this way, 42 and 46 would be good places to start, and in any case you should pick up an understanding without actively reading all the references in the range. If the paper carried on to say "previous attempts[63-79] to combine these two approaches have shown some success, however..." you'd probably have quite a bit of reading to do, as you're presumably interested in the combination as you're reading this paper.
Upvotes: 1
|
2015/05/05
| 469
| 1,896
|
<issue_start>username_0: I submitted my thesis some days ago and I am currently preparing for the viva. I have found an error in a figure which summarizes four other figures. In fact, I did the calculations in Excel and made a mistake with some of the numbers. I have prepared a new file with the corrections, but I confess that I am scared.
Has anyone experienced such a situation, or can anyone give me a tip about what to do?
Thank you very much indeed<issue_comment>username_1: Speak to your mentor. However, I see no reason to panic or to be scared, if the mistake doesn't influence some fundamental contribution of the thesis. I've found numerous cases where mistakes in the thesis were found years after the defense, even with considerable impact (one extreme case de facto made ~60% of the thesis obsolete aka wrong).
If the thesis is completely printed/prepared/submitted, I guess that the mentor won't insist on repeating the process.
Again, this is not a great issue, if the error is not of fundamental nature (e.g you base your thesis that the actual value of pi is 4, but somehow you miraculously discover now that you were of by 0.8584).
Upvotes: 3 <issue_comment>username_2: Don't keep it a secret, but don't walk in and start apologising for it either.
If the discussion in the viva turns to the error, be honest about it. If it turns to that figure, I would raise it proactively. I'd also have a corrected copy with me if possible. There's no such thing as a perfect thesis (including when the final copy goes to the library) so make use of the opportunity to fix everything you can. I found nonsense (but not as bad as misleading) wording in my readthrough the night before the viva, as well as typos, none changing the outcome but some in equations. I brought a list and mentioned it at the end of the viva, and they were accepted as needing fixing with the corrections.
Upvotes: 1
|
2015/05/05
| 1,005
| 4,648
|
<issue_start>username_0: Many of the research papers that I have read are not dated in terms of publication date. By dating I mean including at least the publication year. The papers I refer to are mostly free PDFs from the internet on various topics, usually affiliated with some academic institution (mostly universities).
One can then only guess the publication date from the dates of the newest references. Is there some reasoning behind this odd practice? If I wrote a paper, or even a draft of one, and made it public, I would clearly date it.
**Edit:**
It seems I have missed the option of checking the PDF document properties, which show a creation date. But this still does not answer the question of why the date is not included within the body of the document itself.<issue_comment>username_1: Most (CS) papers that you find on the websites of the publishers have the bibliographical information in the PDF, typically on the first page of the paper. This typically includes the year. These papers are, however, also typically behind paywalls, meaning that you won't get access to them unless you (a) buy a copy of the article, or (b) have access through your institutional subscription.
Many research paper PDFs that you find freely on the internet are *self-archived versions of the papers*. These are PDFs provided by the authors on their personal or institutional web pages and not prepared by the publisher. While these do not constitute the *official* versions of the papers, authors normally do not modify the content of the PDF, so the official and unofficial versions of a paper do not get out of sync. Now it happens that most style files provided to authors for writing their papers for a specific venue do not feature a field for the bibliographical information. Rather, that information is added later by the publisher. Thus, it is missing from the PDF made by the authors themselves.
It should also be noted that only a few people see this as a problem. The bibliographical information is contained on the authors' webpages from which you often download the papers. Also, you mainly need the bibliographical information for citing the paper, and for that, you can pretty much always download the whole bibliographical entry for a paper from the publisher's website. Just type the paper title into your favorite search engine and click on the respective result. For computer science, most papers are in DBLP anyway, which also gives you a complete bibliography entry at the cost of a mouse click.
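Relatedly, the creation date the question's edit mentions lives in the PDF file's metadata and can also be read programmatically. A minimal sketch, assuming the third-party `pypdf` package is installed and using `paper.pdf` as a placeholder file name:

```python
# Minimal sketch: print the creation date stored in a PDF's metadata.
# Assumes "pip install pypdf"; "paper.pdf" is a hypothetical file name.
from pypdf import PdfReader

reader = PdfReader("paper.pdf")
info = reader.metadata  # may be None if the file carries no metadata

if info is not None:
    # Raw values use the PDF date format, e.g. "D:20150506120000+02'00'"
    print("CreationDate:", info.get("/CreationDate"))
    print("ModDate:     ", info.get("/ModDate"))
    print("Producer:    ", info.get("/Producer"))
else:
    print("No document information dictionary found in this PDF.")
```

Keep in mind that this is only the date the PDF file was generated, which may differ from the actual publication date, so treat it as a rough clue rather than bibliographic truth.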
Upvotes: 4 <issue_comment>username_2: *"The papers I refer to are mostly free PDFs from the internet on various topics"* – then it's most likely one or both of:
* It is not a peer-reviewed paper published by a reputable journal. Be careful what you read, there are plenty of quack theories out there
* You are viewing a preprint, and by finding the article on the journal's website you will find the date. Also, the final version of the article will probably be better formatted, and may be better written and have mistakes corrected (this depends upon the stage of publication at which the preprint was circulated).
Many preprints are circulated prior to publication, so you may not find a better version yet. It's still really poor form for an author not to date a preprint. If a preprint is a few years old and has not yet been published, you have to wonder *why*.
I can not recall ever seeing a journal that does not put the journal name, date, and issue number on at least the first page. It would be very poor practice not to do this, because taken on its own it is impossible (without further research) to tell where the article came from. It's common even to date *every* page with the journal name, date, and issue number.
I suspect that if you're seeing this trend in most of the "papers" you read, that you really aren't reading research articles written by legitimate researchers. Ask your advisor or lecturer for the names of the most relevant journals in the field, and start from there.
Upvotes: 5 <issue_comment>username_3: I agree that it is extremely frustrating to not have a date on a technical paper. It is often impossible to tell the currency of the work and hence whether any conclusions represent the latest thinking. The only reason I can think of is that most technical papers are published by technical associations, who then generate revenue by selling access to papers. When no date is included, researchers are unable to judge currency and are thereby forced to purchase more papers than they need, hence generating more revenue for the publishing house or society.
Upvotes: 0
|
2015/05/05
| 473
| 2,017
|
<issue_start>username_0: Suppose one gets a grant, and for some reason (e.g., administrative problems with the university) the project has to be ended before the grant period is over. Should one list such a grant in the CV, or just let it fall into oblivion?
If putting it in the CV is OK, should one state that the grant ended before its time, or even state the reasons?<issue_comment>username_1: The main reason to include a grant on a CV is to show that you can get a grant, which you did. Whether or not you made productive use of the grant is generally measured from the publications arising from the grant. You can provide explanations, if necessary, in accompanying letters -- a CV does not contain explanations, it just lists the essential facts.
Upvotes: 3 <issue_comment>username_2: To add to @username_1's great answer, I would further say that in what I've seen CVs just list the basics of the grant: funding agency, agency grant ID, title, date range, amount, and the role of the CV author in the grant, i.e. PI or co-PI. Often, people also list that they were part of the writing team of a large-scale or important grant even if they were not among the PIs in order to emphasize significant participation where agency regulations do not allow them to be otherwise included\*. I rarely see the papers coming from a grant tied to that grant's listing in the CV or vice-versa. Each of the papers will individually acknowledge the funding source, but they are usually not tied together in the CV.
In your situation, I would list the grant with the amount that was awarded (not spent) and the date range that it was active with no further explanation unless it was killed by the funding agency. If it was, you might offer an explanation if you can keep it short. Otherwise, if someone asks about a grant performance period that looks short, you can wait until then to explain it.
\*: NSF allows 5 people to be listed as PI/co-PI, but several of our grants have included more significant writers than that.
Upvotes: 2
|
2015/05/05
| 804
| 3,322
|
<issue_start>username_0: I have published papers in the field of mechanical engineering, combustion, engines, etc. Previously I relied on MATLAB for most of my data processing, and C++/Fortran for computations. I duly cite languages used.
Recently I switched to Python for its comprehensive libraries, plotting capabilities, and support, and above all because I don't have to struggle with licensing issues. Now I am worried that citing Python might affect paper acceptance, since it is a less conventional choice in my field.
Will using an unconventional programming language increase my chances of rejection?<issue_comment>username_1: Any answer will likely depend on your field and the specific journal or conference you submit to. Programming languages are *tools*, just like your literature database frontend. As long as your tools are not manifestly unsuitable to the task or to the venue you submit to, I can't imagine anyone holding the tool against you. If you write your high performance computation in COBOL, I'd say this is a case where the reviewer might question your grasp of the field.
This, of course, does not hold if you submit to a journal or conference that explicitly addresses a particular programming language or paradigm. If you submit a paper that exclusively relies on [Haskell](https://www.haskell.org/) to the [R Journal](http://journal.r-project.org/), you likely will be rejected.
(And Python specifically is sufficiently hip nowadays that I don't think it will raise an eyebrow, except for possible performance problems, as per [@aeismail's comment](https://academia.stackexchange.com/questions/44872/does-the-choice-of-programming-language-affect-paper-acceptance#comment101199_44872)).
Upvotes: 3 <issue_comment>username_2: The short answer is, no. I've never experienced or heard of a reviewer caring about what language was used for code in a science or engineering paper.
In any case, I don't think Python is "unconventional" in 2015. Here are some well-known and widely used codes that can be used for CFD with Python front-ends:
1. <http://pyfr.org>
2. <http://fenicsproject.org>
3. <http://github.com/clawpack/pyclaw>
Note that all of these use Python in combination with lower-level languages for performance.
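As a rough illustration of that combination (a hedged sketch only, not how any of the codes above are actually organized): the Python layer handles problem set-up and orchestration, while compiled kernels do the heavy numerics. Here NumPy's C routines stand in for the lower-level language:

```python
# Sketch of the "Python front-end, compiled back-end" pattern.
# NumPy's C kernels play the role of the lower-level language; the
# 1-D Jacobi-style smoothing and the array sizes are illustrative only.
import time
import numpy as np

def relax_pure_python(u, iterations):
    """Jacobi-style smoothing written as explicit Python loops."""
    u = list(u)
    for _ in range(iterations):
        u = [u[0]] + [0.5 * (u[i - 1] + u[i + 1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

def relax_numpy(u, iterations):
    """Same update, but the inner loop runs inside NumPy's compiled code."""
    u = u.copy()
    for _ in range(iterations):
        u[1:-1] = 0.5 * (u[:-2] + u[2:])
    return u

x = np.linspace(0.0, 1.0, 20_000)
u0 = np.sin(np.pi * x)

t0 = time.perf_counter()
relax_pure_python(u0, 200)
t1 = time.perf_counter()
relax_numpy(u0, 200)
t2 = time.perf_counter()
print(f"pure Python: {t1 - t0:.2f} s,  NumPy: {t2 - t1:.2f} s")
```

On a typical machine the vectorized version should run far faster, which is also why, if you ever compare performance numbers across papers, the implementations need to sit on comparable platforms.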
I'll also mention the educational course [CFD Python](http://lorenabarba.com/blog/cfd-python-12-steps-to-navier-stokes).
Upvotes: 3 <issue_comment>username_3: One case when the language may be questioned is when you are doing performance profiling and comparing results with what others have achieved. For such cases, you usually get into the least trouble when using what the majority uses. To compare performance, the algorithms must be implemented on comparable platforms, not Python vs Assembler.
Upvotes: 2 <issue_comment>username_4: Present it as a proof-of-concept code and you ought to be okay. If you believe it's beyond that, then use the standard languages for your field. Nothing wrong with Python, our computational physics friends use it and develop impressive libraries for it. (It's interesting to see comments from a few years ago when Python was the new kid. By contrast Python 3 pretty much dominates some areas now.) The goal isn't always performance on a particular problem, sometimes the mark of great code is the ease of adoption by others.
Upvotes: 0
|
2015/05/05
| 695
| 3,139
|
<issue_start>username_0: When quoting the number of citations for each paper you have published, different sources can be used: Google Scholar, ResearcherID, Scopus.
Google Scholar covers a larger range of literature.
Is it acceptable to use Google Scholar? Or is it not a professional resource, meaning we should necessarily use ISI Web of Knowledge for counting citations?<issue_comment>username_1: None of the citation databases are particularly good. Google Scholar tends to err on the side of inclusiveness (thereby over-representing impact), while the more curated databases (e.g., ISI WoK) tend to err on the side of exclusiveness (thereby under-representing impact). This is particularly acute in some fields: computer science, for example, is notoriously under-represented in the curated databases, to the degree that it has its own independent citation database, [DBLP](http://dblp.uni-trier.de/), which of course has its own different problems.
You can potentially use any of these databases to report citations in a reasonable and professional manner, as long as you are consistent and make it clear which database you are using, such that readers can adjust their expectations accordingly.
Upvotes: 4 <issue_comment>username_2: Google Scholar is actually quite useful, insofar as it draws a broad net for publications, whereas many other citation databases fail to capture important papers. Unfortunately, Google Scholar also has some problems with double-counting, since it often treats different versions of the same articles as if they were different publications, which can artificially inflate citation count. A secondary problem with Google Scholar (which also applies to other citation databases) is that it gives "raw" citation counts (and derived metrics like H-index, etc.) rather than "author adjusted" metrics. This also artificially inflates citation count, especially for authors who do most of their work with a substantial number of co-authors.
One problem with Google Scholar is that its broad sourcing of papers means that it can be subject to manipulation by publishing mock papers that make no scholarly contribution but give citations to other works or to each other (see e.g., [López-Cózar et al 2012](https://arxiv.org/abs/1212.0638)). The more tightly curated citation databases do not have this problem since they only count publications in approved sources; typically these sources are established scholarly journals with effective peer review processes.
As the other answer here points out, none of the citation databases are very good. They all suffer from under- or over-coverage and most do not make a serious attempt to adjust metrics for authorship. Personally, I think Google Scholar is less bad than many of the others, since it at least captures all the important papers a person has written, rather than artificially excluding those that fall outside a narrow range of journals. However, you should be aware of its drawbacks in relation to other citation databases and ensure that you carefully inspect the individual publications when using it as an evaluative source.
Upvotes: 2
|
2015/05/05
| 396
| 1,637
|
<issue_start>username_0: I am an event manager at our university. I see that academics are very mobile through visiting scholar programs. I wonder if you have ever seen similar mobility among university administrators?
I am very interested in going to work voluntarily in the event office of another university to gain new experience.
Before contacting some universities to see if they offer such a possibility, I wanted to hear from people here whether it is a feasible and common idea or a ridiculous one.<issue_comment>username_1: I have heard of temporary administration positions being created to fill a special need of the university, but I've never really heard them referred to as visiting. This is probably more likely to be well received if you have some particular experience or skillset that would be useful to a university.
Upvotes: 2 <issue_comment>username_2: The term you might be looking for is **secondment**.
To quote one example university ([Macquarie University policy](http://mq.edu.au/policy/docs/secondment/guideline.html)):
>
> **Secondment:** an arrangement made with the mutual consent of the
> supervisor and staff member, where a staff member is released under
> specific agreed arrangements to work in another area within the
> University or with another organisation for a specific period of time.
>
>
> A secondment arrangement may be made in the following circumstances:
>
>
> * within the University (internal secondment)
> * to an external organisation (external secondment)
> * from an external organisation (external secondment)
>
>
>
Thus, you may want to search your university for a similar policy.
Upvotes: 3
|
2015/05/05
| 1,188
| 5,133
|
<issue_start>username_0: Graduate students (at least in the US) often have to take foreign language competency exams, particularly if they are in the humanities. In things like classics or Biblical studies, I might expect a higher standard. **But what is the standard generally required of students outside of such areas (but still within the humanities) in the US?**
Since these are PhD programmes, I would normally expect a high standard (e.g. at least beyond the second year level in a good undergraduate programme). But purely anecdotally, I've reason to doubt this:
* I know one person who was a native Japanese speaker and who used Japanese and German (or possibly English, via some exemption for foreign ESL speakers) to meet her language requirements, but who claims that her German isn't very good.
* I know of a first-year student in art history who is using Italian and Chinese to meet such requirements. His comments about his Italian revision and experience suggested that he was at a fairly low level,1 and his attempts to write or converse in Chinese were, speaking as a native Chinese speaker, not particularly competent. Apparently he had to take his Italian exam shortly upon entering grad school.
While perhaps I'm just being too presumptuous about other people, this seems to suggest that my idea of language competency exams requiring people to comfortably read something like untranslated Foucault is wrong.
---
1. From his comments, it seemed that I know at least as much French as he does Italian, yet I have trouble reading French scholarly texts with confidence.<issue_comment>username_1: I have a comment, not an answer, but I don't think it will fit in the little bitty space very well.
I'm not qualified to write an answer because I never studied humanities.
>
> I know one person who was a native Japanese speaker and who used Japanese and German (or possibly English, via some exemption for foreign ESL speakers) to meet her language requirements, but who claims that her German isn't very good.
>
>
>
There are a couple of possible explanations of this. Perhaps her humility makes her say her German isn't very good. Perhaps she is comparing her German with her English, and her German is not as strong as her English.
If she were doing a PhD in Germany, do you think her English is strong enough that she could use it to pass a foreign language competency exam? If so, then I think her department did the right thing by declaring her competent.
>
> I know of a first-year student in art history who is using Italian and Chinese to meet such requirements. His comments about his Italian revision and experience suggested that he was at a fairly low level, and his attempts to write or converse in Chinese were, speaking as a native Chinese speaker, not particularly competent. Apparently he had to take his Italian exam shortly upon entering grad school.... From his comments, it seemed that I know at least as much French as he does Italian, yet I have trouble reading French scholarly texts with confidence.
>
>
>
I don't speak Italian, but I speak Spanish and French. I have noticed that Spanish is often more straightforward than French. I suspect that Italian is pretty straightforward too.
Keep in mind the context -- the U.S. is such an overwhelmingly unilingual place and culture.
>
> seems to suggest that the idea I have of language competency exams requiring people to comfortably read something like untranslated Foucault easily is wrong
>
>
>
My guess is that your idea *is* wrong. (I wish it weren't.)
Upvotes: 0 <issue_comment>username_2: I'll give a sketch of an answer, from the perspective of someone at a top 10 university in the U.S.
Recently, I had a consultation about potentially doing German coursework. It was also suggested I take a course for learning to *read* German that was intended for graduate students who needed to meet their reading competency requirements, particularly since I had placed (however accurately) into second-year German. **The reading course presumes no initial German knowledge of its students. Moreover, the course only takes up the time of a single term.** I would be thus inclined to say that the reading exams cannot be possibly so difficult, but I have a lot of trouble reading samples for the German one without more knowledge.
Since the quality of my French is less suspect, I'll use that as an example. **Looking at old reading exams for French at my university, I would probably be able to pass (or at least not fail abysmally), and I probably have the knowledge of almost two years of college-level French.** But what really helps is that I have a good chunk of knowledge about grammatical forms (so I can look at a verb and guess its infinitive easily) and more experience with vocabulary than I have in German.
**So perhaps the language competency exams, at least when based on more humanities-oriented texts, really aren't *that* difficult, but only in the sense that a student who knows a lot of grammar and has some good experience of vocabulary should be able to survive.**
Upvotes: 2
|
2015/05/06
| 1,068
| 4,427
|
<issue_start>username_0: This question is related to one of my previous questions:
[Changing University in First year of Phd](https://academia.stackexchange.com/questions/44780/changing-university-in-first-year-of-phd)
I just completed my senior year. I have been accepted to a grad school (say University X).
I want to reapply for some PhD programs next year (fall). But, at the same time, I don't want to lose the only PhD seat I've got.
So, I was thinking about deferring the admission to University X in order to apply to a few other universities for next fall.
I have 2 questions regarding this:
1. Should I mention my deferred admission to Uni X in the PhD applications for next fall? If I don't, would it be treated as academic cheating?
2. How long a deferral would be enough? (Next spring, next summer, or next fall?)<issue_comment>username_1: Deferring admission is not an automatic privilege at most universities at the doctoral level. You will typically need to justify why you want the deferral, and explain what you would do with the time. For instance, a Fulbright fellowship or a "service payback" on a fellowship might be valid justifications for a deferral; applying to other grad schools most certainly is *not*. If you are found to be deferring at one school to apply to another, you may lose out on admission to both, as the first school may retract their offer, and the second would likely not want to accept someone who might try and hold out on them.
Upvotes: 3 <issue_comment>username_2: My understanding of a deferral is a bit different. It is more like getting an early offer of admission for the following year. Thus if you defer for a year you are not obligated to enroll in the program the following year. Well really you are never obligated to enroll in a program until you sign paperwork to that effect -- which, for many PhD programs, takes place when the student actually enrolls in the program. I also think that the most common reason for deferral is the OP's: that the student is just not fully committed to the PhD program she has been admitted to, and she hopes that the intervening year will clarify whether or not she should enroll. I think that a student *should* be pursuing other options during that year...assuming that the student and the program are on the same page about this.
As others have said, of course there is nothing like a *right* of deferral: if the application was not solidly strong then presumably the answer will be "No" or "Not without a good, specific reason" (e.g. health or visa issues). But I think that in many cases, an admissions committee can look at an application and say -- sure, we are confident that we would admit the student next year if they submitted the same application. By telling the student that now, we make their eventual enrollment in our program the path of least resistance.
Anyway, what's for sure is that in order to defer admission you need to have a serious conversation with the faculty of the program in order to make sure that you both understand each other and your commitments. In a comment on a previous answer I wrote that without mention to the contrary the understanding of deferral should be as in the first paragraph. Especially in light of Prof. Ismail's answer I now think that was a mistake. Sorry for giving bad advice in that regard.
Upvotes: 2 <issue_comment>username_3: @zzz, you appear to be feeling rather stuck. You've been accepted to a school you don't feel a real commitment to. Do you attend and then transfer? Do you tell them you want to wait a year, and hope you catch a better fish? Do you just forget all about this school that accepted you, and spend a year feeling anxious? None of these solutions seem to fit very well, so you go around and around.
I am going to suggest that you consider the following:
1. forget all about this school that you feel so lukewarm (or even
doubtful) about;
2. take this year to do something worthwhile. Here are some examples:
* enroll in a one-year master's program in a closely related field
* take some undergraduate courses at the same institution you are about to graduate from (in a related field)
* volunteer in an organization you deeply admire
* get a job and build up some savings
* audit some graduate level courses at a university you have a high opinion of (auditing costs a fraction of what it costs to enroll for credit)
Upvotes: 1
|
2015/05/06
| 1,590
| 6,799
|
<issue_start>username_0: I do *not* ask about special cases, e.g. supervisor does not approve the thesis, there is rivalry between the supervisor and the examiners and so on.
Is there any general standard for viva, e.g. one needs to be able to answer 2/3 of the questions, etc.
I'm waiting for my viva, and I'm only worried about questions about related work. For example, my thesis only addresses a problem on deterministic programs, do I need to know all approaches to the same problem on probabilistic programs? (at the moment I don't).
---
UPDATE
------
I would like to emphasize again that I am not asking about special cases, e.g. plagiarism, wrong methodology, etc. I believe these cases are extremely rare. I just want to ask about a normal case where one managed to publish some papers, and the advisor (happily) approved the thesis.<issue_comment>username_1: No, there are no general worldwide standards, other than "the examiners should be satisfied".
Your advisor is probably the best person to answer questions like this for your specific case.
Upvotes: 6 [selected_answer]<issue_comment>username_2: I assume that the aim of a viva is generally 'to establish whether the aims of the PhD program have been achieved', although I guess some will be more restricted than that (with another grouping checking the overall picture).
Therefore generally there won't be a specific question-proportion to pass, since there's no set list of questions. You need to demonstrate that you wrote your thesis, that it is right (well, close enough, since there will usually be some mistakes somewhere), and that it is of a suitable standard. You probably also need to demonstrate that you've engaged with the professional development part of the program (as a PhD student you are essentially an apprentice professor/researcher). You should know stuff about your field other than your immediate thesis question, you should know how the research community works, and you should be able to communicate your ideas (in writing, in a planned presentation, and in answering questions). Having some realistic plans for the future might also not be a bad idea.
Upvotes: 3 <issue_comment>username_3: Further to Nate and Jessica's answers, you can get an idea as to what the risks are in your institution and your subject area by asking around for examples of people who have failed their viva.
'Failing' is also relative: the 'do not darken our doors, no option to resubmit' outcome is only likely to be a risk in case of plagiarism or [moral turpitude](https://academia.stackexchange.com/a/30951/24914). The difference between acceptance pending corrections (regarded as a 'pass') and asking for a 2nd viva (often regarded as a 'fail') may not be huge in terms of the amount of effort on your part that is required to satisfy your examiners.
Upvotes: 2 <issue_comment>username_4: As a former PhD student, now lecturer, it's very rare to hear of one failing a PhD viva. The most common outcome is to grant the PhD subject to minor corrections, which will be checked off by the internal examiner. Occasionally candidates will have to make major corrections for review by the external examiner, and much more rarely, candidates will be told to revise and resubmit where the process may or may not include an additional oral examination. Only the last outcome would be generally regarded as "failure".
In the first instance, the student's advisor would not recommend/allow (depending on the institution) the candidate to go forward for a viva unless the work was at the standard required. In many universities, this means publishable in whole or in part, and making a non-trivial novel contribution. One way to satisfy yourself about these criteria is to check whether you've published anything at all to date: if you have, that settles the question of whether the thesis is publishable at least in part. It's very hard to "fail" (see above) if these basic criteria are met.
About related approaches, you would probably be expected to have a rationale for choosing your own approach. This would imply that you know your chosen method well, and know enough about the others to be able to make a comparative choice. E.g. if you were working in electromagnetics and chose the Finite Element Method to solve a Partial Differential Equation, you would probably want to be able to point out why you rejected Finite Differences and/or Analytical methods. But I would not expect you to be able to discuss the intricacies of Finite Differences in any great detail. The working knowledge you already have of your own approach and that of others is probably quite sufficient, as long as you can provide a strong justification for why you chose the methods you did. It's quite OK to justify based on ease of use, local availability, expediency etc.
Upvotes: 3 <issue_comment>username_5: The following are some reasons that come to mind that might justify a failure in the viva:
For the thesis:
* evidence of academic malpractice (plagiarism, etc.)
* *fundamental* methodological flaws, such as a poorly chosen method or a misapplied method that calls into question the scientific validity of the thesis. My guess is that this is probably the most common reason for failure: that the work is just generally not a good piece of science and needs further development before it can be considered convincing.
* a *significant* failure to engage with preexisting literature, which casts into doubt the significance and originality of the thesis' contribution to knowledge.
* a general lack of academic substance such that the thesis is not of sufficient scope or novelty to merit the award of a PhD.
For the viva itself:
* a general inability to answer questions about the thesis—to a such a degree that the examiners are led to question whether it is really the candidate's own work.
* a *significant* deficiency in the candidate's knowledge of the literature, such that s/he cannot confidently be held to understand the relationship between his/her work and that literature.
* a *significant* inability to justify the decisions made in the course of the PhD, calling into question the candidate's ability to construct a successful research project. For example, you should be able to say why you chose the approach you did, rather than some alternative.
* repeatedly giving answers that are so severely wrong that they call into question the candidate's competency in the field.
---
The use of the words fundamental, significant, etc. in the above is quite deliberate. Examiners will forgive basic problems with the thesis or a somewhat flakey viva performance because they understand that passing the viva marks the start (rather than the end) of one's education. The option of minor corrections exists to deal with such issues.
Upvotes: 4
|
2015/05/06
| 1,364
| 6,228
|
<issue_start>username_0: I got a denial to a PhD application I sent. The mail basically stated that they had picked another candidate and wished me luck.
I am doing my master's at a German university, and I applied to another department in the same university.
Is it usual to ask for the reasons for a denial? Would it even make sense? If it is something like a low GPA, there is pretty much nothing I can do to change that. If, however, it is something like "your motivation letter was too long," I guess it is salvageable.<issue_comment>username_1: The institution may have a feedback process that you can apply to. In many cases there are just too many applications to be able to give feedback on them all. Depending on the country you live in, you might also be able to use Freedom of Information legislation, but the process can be quite onerous. Could you get a friend in a graduate program to have a read of your application and give you some recommendations?
Upvotes: 2 <issue_comment>username_2: Our department receives requests for feedback from rejected candidates fairly regularly, so yes it is quite typical. However, my university operates an institution-wide policy that no feedback will be given to unsuccessful applicants. I appreciate that this is not helpful, but (1) providing feedback to the (many) unsuccessful candidates would impose a large additional administrative burden, and (2) some candidates who bear a grudge would invariably use any feedback provided in legal proceedings against their rejection.
So, in short: it can't hurt to ask, but be prepared for the distinct possibility that your request might be refused. In the first instance, it is advisable to write to an admissions administrator rather than a faculty member.
Upvotes: 4 <issue_comment>username_3: You can certainly try asking, but I expect you won't get a useful answer. In the cases I'm familiar with (mathematics departments in the U.S.), you aren't likely to get any official answer beyond "no comment". This isn't because the department is being unhelpful or deliberately withholding information they could reasonably share. Instead, it's because offering useful feedback is really hard.
Rejected applicants sometimes imagine that the rejection may have been due to some identifiable flaw they could fix, but in my experience this is rarely the case. Most rejections are based on an overall evaluation, rather than isolated flaws. Even if a single problem derailed the application, there might have been other issues that were never identified because discussion stopped when the application died. Nobody wastes times on hypotheticals like "Would the candidate have been admitted if it weren't for this problem?", since the committee is eager to move on to the more contentious cases. So even if the committee can point to a specific problem, they generally can't predict that fixing it would lead to admission. It's not particularly useful or encouraging to be told "Here's a serious problem with your application, but don't assume you'll get in if you fix it. We haven't even considered that question, so who knows?" (Plus there are legal issues: you don't want to be sued by someone who is told their application was rejected for a specific reason and then discovers that you admitted another applicant with the same issue. You might have had an excellent reason - perhaps the rest of the other application was magnificent - but it could still look like a pretext for discrimination. It's safest not to offer any explanation that oversimplifies the full decision making process.)
These cases with isolated flaws are really not typical. Instead, most feedback would fall into two categories: "Your application just wasn't good enough overall for us to even consider you" or "Your application was good and you made the short list, but you were outcompeted by people with even stronger applications". The first sort of feedback adds insult to injury, and it's wise to avoid making enemies among the applicants you reject. The second sort of feedback is also not constructive, since there's usually no good way to convey to the applicant how the final comparison was made. The evaluation process consisted of multiple readers, perhaps with differing opinions, followed by a committee discussion that compared the application with the competition. Confidential letters of recommendation played a major role, and in any case the process was lengthy and involved enough that it's difficult to summarize it usefully.
For comparison, U.S. National Science Foundation review panels provide applicants with copies of the written reviews and ratings of their proposals, together with a brief summary of the panel discussion. This is about as much transparency as one could reasonably hope for, but it rarely leaves rejected applicants with a clear idea of how to improve their future proposals. In many cases they don't even have a good understanding of why they didn't make it (beyond "some other proposals were even more compelling").
Upvotes: 6 [selected_answer]<issue_comment>username_4: If you're applying for a "free-standing" PhD position, then it is very unlikely that you will receive any feedback whatsoever, as such searches are tantamount to job searches. In general, candidates are not given feedback for why they are turned down for such positions, and I would expect such policies to apply to PhD candidates as well.
Upvotes: 1 <issue_comment>username_5: Most departments only have a few spots available with quite a bit of competition; in the majority of rejections, the answer is just that you weren't in their top *n* applications, not that you did something wrong.
Upvotes: 2 <issue_comment>username_6: I once got rejected from a with-a-stipend Masters' program (not exactly the same, granted), without a clear reason being given. Eventually I "heard it through the grapevine" that the department ran out of money on start-of-year stipendiaries and couldn't afford accepting the mid-years - but nobody would ever tell me that on paper (= officially).
So, even if they would be willing to let you know, they might not be willing to let you know officially. Sniffing might be in order...
Upvotes: 1
|
2015/05/06
| 438
| 2,015
|
<issue_start>username_0: I've read that it's quite common for students to take a graduate program in a field different from their bachelor's degree. I was wondering if it is possible for someone with a bachelor's degree in Political Science and a Master of Business Administration to teach Political Science subjects.
So yes, it's possible, but I think you'd have to show some background related to the subject in question. It will also depend on what kind of job you are applying for. If a Poli Sci department needs an adjunct for one course, you might qualify to teach that course, but you are unlikely to land a full-time position unless your Doctor of Business Administration research was related to politics somehow.
Upvotes: 1 <issue_comment>username_2: Highly unlikely. Even at the community college level, to teach in a field one must typically have at least some postgraduate degree in that field or in a closely related field. So you someone probably teach in a polisci department with a degree in economics, because they use many of the same methods and consider some of the same areas. Ditto philosophy. Not a professional business degree.
(PhD political scientist here)
Upvotes: 2
|
2015/05/06
| 1,204
| 5,458
|
<issue_start>username_0: A while ago, we read a paper published in a quite credible journal (impact factor around 3), which we believed had a few very basic but significant mistakes in the methods and final results.
The journal's guide for authors mentioned the possibility of submitting a discussion ("Discussion: A short commentary (1000-3000 words) discussing an article previously published in XXX."). Together with a colleague, we decided to write a discussion manuscript highlighting the problems of the paper in two pages. We did it twice, actually: one version very polite, and a second more to the point.
Although each time all of the reviewers agreed on the highlighted problems and one reviewer even recommended publication "as is," both times it was ultimately rejected, because the editor focused on a comment of the reviewer like: "was not general enough", or because "it is more a discussion, and not an original research contribution." (But this was actually exactly what was intended, and fully in agreement with the objective for this type of manuscript in the guide for authors.)
It is OK if a manuscript is rejected (I am a PhD student because I like to learn new things), but it should be rejected for the right reasons. The arguments against the article did not seem to consider properly the submission type, "discussion" instead of "original research article." Furthermore we have the feeling that the editor could be embarrassed to publish a discussion highlighting a paper with such errors that could be recognized by frankly any above-average high school student. (although the paper has been cited quite a number of times, apparently without anyone noticing the mistakes).
It is probably not worth the effort, but it became a matter of principle.
We contacted a few other people (not our friends) in the field for a quick opinion about the manuscript, and they confirmed our impression that it had probably been rejected out of embarrassment. This seems quite a disgrace, but what can one do about it?
Should we publish the manuscript together with the reviewer comments on our group website? Submit the manuscript to a competing journal? Would it make sense to contact the publisher (Elsevier) to complain about the editor in chief?
I should also mention that the writers of the original article are, according to their group website, co-sponsored by a big company which gains obvious advantages from their erroneous findings. Is there maybe an ombudsman we could go to?<issue_comment>username_1: I can understand the editor's decision not to publish your discussion if the mistakes highlighted should have been identified by the reviewers.
However, it does seem out of place to send a discussion of a paper to a journal different from the one in which the paper was published.
As for a solution, in your stead I would not really know what to do either. If you have a relatively significant connection to authors in your domain via social media (e.g. Twitter, ResearchGate), you could consider uploading your manuscript there.
Upvotes: 1 <issue_comment>username_2: The first step to take, if not already done, is to *write back to the Editor in Chief*, politely explaining that the status of your submission (a discussion letter) seems to have been overlooked. Stress again, politely, that the journal's policy explicitly allows such letters, and that it would be better for everyone if they either enforced this policy (and did not reject letters on the grounds that they are not full research articles) or removed it from the guide for authors. Depending on the answer you get, there are several possible follow-ups if your letter keeps being rejected.
You can *contact the authors* and see what they have to say. In fact, this is probably something to do before any of the suggestions below. They may acknowledge the mistake and publish an erratum themselves, crediting you, and this would make things right in the best way. If they do not answer in a satisfactory way, at least they will have been warned and your case will be stronger.
You can *appeal to an ethics committee* in your field, if one exists, disclosing your letter explaining the error, the written exchanges you have had with the journal, and the conflict of interest you spotted for the authors of the original paper. Do not make assumptions; just present the facts and let the committee judge for itself.
You can *try to publish your letter in another journal*, in order to set the official record straight. Depending on the existing venues, you may have to add some flesh to your letter and grow it into a full, if short, paper. You are right in your principles: such mistakes should not be left unknown, and a blog post is too personal and too unofficial to set things right. People should be able to find the information on the mistakes you spotted through the databases usually used in your field.
But, given you are a PhD student, **before you take any step I very strongly advise you to ask your advisor (or another senior researcher you can trust) about it.** Depending on your field, your situation, and the stature of the authors whose work you criticize, you could end up in a pretty bad situation if you are not careful. I cannot tell from the information you gave, but you also have to protect yourself, and unfortunately that is not always achieved by doing the right thing.
Upvotes: 4 [selected_answer]
|
2015/05/06
| 832
| 3,716
|
<issue_start>username_0: Something that came up the other month: I was asked to sign a declaration that something I'd scribbled, amongst other things, didn't contain the names of any individuals (living, dead or fictional) without "consent". Which led me to ask a few questions, such as WHY, and how a fictional individual can grant consent to cite a nonexistent work they haven't created. Anyway, the WHY is apparently down to "[Right to Publicity](https://www.law.cornell.edu/wex/publicity)" statutes in various US states (there's no direct equivalent in the UK).
Anyway I'm still wondering if similar declarations are standard in all US publishing agreements, and whether there really is an assumed grant / waiver, on having something published, for others to cite it?<issue_comment>username_1: I can understand the editor's decision not to publish your discussion if the mistakes highlighted should have been identified by the reviewers.
However, it does seem out of context to send the discussion of a paper published in a said journal to a different journal of which the first one was published in.
As for a solution, in your stead I would not really know what to do either. If you have a relatively significant connection to authors in your domain via social media (e.g. Twitter, ResearchGate), you could consider uploading your manuscript there.
Upvotes: 1 <issue_comment>username_2: The first step to take, if not already done, is to *answer back to the Editor in Chief*, politely explaining that the status of your submission (discussion letter) seems to have been overlooked. Stress again, politely, that the policy of the journal allow explicitly such letters, and that it would be better for everyone that they either enforce this policy (and not reject letters on the ground that they are not full research articles) or remove it from the guide to the authors. Given the answer you get, there are several possible follow-ups if your letter keeps being rejected.
You can *contact the authors* and see what they have to say. In fact, this should be done before any of the options below. They may acknowledge the mistake and publish an erratum themselves, crediting you, and this would make things right in the best way. If they do not answer in a satisfactory way, at least they will have been warned and your case will be stronger.
You can *appeal to an ethics committee* in your field, if one exists, disclosing your letter explaining the error, the written exchanges you have had with the journal, and the conflict of interest you spotted for the authors of the original paper. Do not make assumptions; just present the facts and let the committee judge for itself.
You can *try to publish your letter in another journal*, in order to set the official record straight. Depending on the existing venues, you may have to add some flesh to your letter and grow it into a full paper, even if short. You are right in your principles: such mistakes should not be left unreported, and a blog post is too personal and too unofficial to set things right. People should be able to find the mistakes you spotted through the databases commonly used in your field.
But, given you are a PhD student, **before you take any step I very strongly advise you to ask your advisor (or another senior researcher you can trust) about it.** Depending on your field, your situation, and the stature of the authors whose work you criticize, you could end up in a pretty bad situation if you are not careful. I cannot tell from the information you gave, but you also have to protect yourself, and unfortunately this is not always achieved by doing the right thing.
Upvotes: 4 [selected_answer]
|
2015/05/06
| 3,324
| 13,712
|
<issue_start>username_0: If you do work that requires a technical skillset (e.g. programming, data science) and plan to work in the private sector, is completing the Ph.D. degree a disadvantage in terms of what opportunities are available to you? Or, do the additional publications, work completed, and everything else that goes into a dissertation count as valuable experience? Is the degree viewed as valuable in and of itself? Additionally, is any increase in pay or job stability enough to offset the opportunity cost of making a graduate student stipend for 2-3 years?
I've wound up in a situation where I'll probably be financially unable to take a postdoc position upon graduation, and will likely be forced into the private sector anyway (which functionally closes me off from an academic career-track), so am considering the option of leaving my program after advancing to candidacy.<issue_comment>username_1: As someone with a PhD who did a regular postdoc and now works in the private sector, the answer is definitely probably not...maybe. As with all things, it depends on the job and your field.
I have a PhD in Computer Science. I did research for a couple of years as a postdoc. I enjoyed my postdoc, but a great offer came along for a private sector job. In my field (high performance computing), having a PhD is valuable whether you're in research or industry. In fact, we hire fresh PhDs as well as folks with experience.
That being said, if you were to go to a startup in NYC or Silicon Valley with a PhD in CS, I'd imagine that while you would probably have as much chance of getting the job as anyone else, you probably won't be getting what you might call "reimbursed" for your opportunity costs. The big companies will have research arms where they know what to do with PhDs while the small ones won't.
It all depends on what you want to do. If you want to get into research (or get back into it eventually), having a PhD will be a must, even if you take a few years in industry to shore up financially. However, if you don't get your PhD now, the chances of you finishing it later are much smaller. There are plenty of people who do it, but if you look around your group right now, you can probably tell me how many you see.
The exception is getting a job where your employer will essentially pay for your PhD (not like an RA position where you make beans). There are some companies or research labs that will allow you to work on your PhD while you work for them, often using a project with your company as a part of your thesis if your interests align with those of your employers. You might be able to find a position like that.
Upvotes: 6 [selected_answer]<issue_comment>username_2: A Ph.D. means that you are suitable for *different* work than if you did not have a Ph.D.
Generally, it means you are well suited for jobs requiring initiative and creativity, and poorly suited for jobs that require reliable and precise performance. This is because a Ph.D. program trains you to want to take things apart, understand how they work, and improve the situation. This is good for creative jobs and bad for jobs where you just need to follow a procedure reliably.
This is great for some types of industrial work, such as R&D, product development, consulting, etc. It is terrible for others, like being a line programmer working on little modules in a gigantic code base. Smart employers know this and hire accordingly.
Upvotes: 4 <issue_comment>username_3: My experience was that a PhD made it more difficult to get a job because I was frequently told that I was "over-qualified". My PhD was in pure mathematics (in particular, no programming was involved). However, I have friends who did PhDs in CS and Electrical Engineering, and their PhDs helped them to get jobs. So I think that if you are working in a more practical field, it probably helps, but if you are doing the kind of PhD that only really lends itself to an academic career, e.g. humanities, it can hurt.
One thing I tried to do was re-train as an actuary, but I think that once you have a PhD, you are no longer seen as a blank slate and people do not think you are capable of working in a relatively menial role. Like the mouse in the fable, you are perceived as having cut down your options and doomed yourself to follow the chosen course.
For the work experience question, none of the employers to whom I applied viewed my PhD studies as work experience. Again, this might be different in a technical field like CS or Biology or Engineering, in which you might have produced something other than a thesis during your PhD, or been involved in commercial activity of some kind.
Upvotes: 4 <issue_comment>username_4: I am obtaining a second graduate degree, and yes, overqualification is a major problem for technical careers. The hiring person may in fact have less education than you, and this scares people away from hiring you. I would avoid a PhD if you want a working career in a technical field; only get a PhD if you want to teach at a university or do research.
Upvotes: 1 <issue_comment>username_5: Certainly it depends on the field *and the location*. Comp Sci, Sciences, Engineering, Statistics and similar PhDs are in outrageous demand in Silicon Valley. In my experience in e.g. Washington DC you'll run into some latent reverse snobbery. Maybe other places as well.
The opportunity cost, however, is probably the bigger consideration. You'd make somewhat more a few years from now with a PhD, but if you're a top performer and just get to work now, it's unlikely the credential will make a huge difference in terms of dollars.
Can you do both? Apply for jobs now and see what comes of it while plugging away at school.
Upvotes: 1 <issue_comment>username_6: I worked as a quant in a bank for a while. It is very hard even to get an interview if you don't have a PhD in a quantitative subject. The worst signal on a CV was an unfinished PhD, since it marked you as someone who does not finish what they start. The PhD in this case definitely increased future salary prospects.
Upvotes: 3 <issue_comment>username_7: I run a career website - Tapwage.com.
It really depends on the type of job you are looking for. The vast majority of software / technology jobs don't require PhDs and look for education / experience geared towards specific tools and skills. You will be eligible for those types of positions, but you will have to be prepared to address the question of whether you are "over-qualified" and whether your PhD skills are transferable.
That said, we are increasingly seeing a greater demand for PhD candidates in computer science / electrical engineering across the board as companies look to tackle more complex engineering problems like data science and analytics, artificial intelligence and machine learning. Here are some key categories of industries that are specifically looking for PhDs and that you should consider in a job search:
1. Finance companies looking to tackle AI / machine learning and big data. These roles can be really interesting and very financially remunerative (especially relative to academia). A sample of such jobs is collected here:
<http://tapwage.com/channel/artificial-intelligence-meets-financial-intelligence>
2. Niche areas like space technology (spaceX, Virgin Galactic, NASA), automotive tech (Telsa) that are bulking up on technology Phds across the board. A sampling of jobs here:
<http://tapwage.com/channel/space-doctor>
3. Startups looking for PhDs - the startup environment is vibrant and, as companies look to tackle complex solutions, PhDs are in meaningful demand. While these may not pay as much cash compensation as corporate jobs, they do pay more competitively than postdoctoral positions, and the equity could be valuable if you have conviction in the idea and its prospects
4. Large technology companies like Google, Facebook, Twitter are all increasingly seeking PhD candidates in areas ranging from natural language processing to digital signal processing.
We feature these types of roles extensively, and if you need specific guidance, feel free to reach out.
Upvotes: 2 <issue_comment>username_8: In addition to the other answers, the **country** also plays a key role.
If you look at the board of a German company, you will notice that the academic titles are prominent. Example: [Siemens](http://www.siemens.com/about/en/management-structure/management-board.htm). When you get emails from Germany, the signatures often include basic titles (Dipl.-Ing). Another example is [BMW](http://www.bmwgroup.com/e/0_0_www_bmwgroup_com/unternehmen/unternehmensprofil/vorstand/vorstand.html).
Same goes for Poland (mgr. inz - the engineering title is often added, particularly for large and older companies)
In contrast, when looking at the board of [Cisco](http://newsroom.cisco.com/exec-bios;jsessionid=70D7BBF18B1195CB1F01A8B98CBBF44A) you do not see titles (even though P. Warrior has a PhD, for instance). Or [Oracle](http://www.oracle.com/us/corporate/press/Executives/index.html).
Sure, these are only a few examples, but there are more.
In my initial answer I mentioned France, where it does not hurt to have a PhD, particularly if it is awarded via a *Grande Ecole* (~Ivy League).
From experience, in Europe and Asia you will have, if not an advantage, at least a head start if you have a PhD.
Upvotes: 2 <issue_comment>username_9: I don't have a PhD but I employ PhDs and work with PhDs.
As others have said, it does depend somewhat on the field and the company. However, in my experience, if you have a PhD in a field reasonably related to the one you're applying for, then having completed it will probably be a positive.
The last bit is important. Unless you are very lucky (or very forward thinking) it is likely to be the skills you acquired/demonstrated to get the PhD (independence, commitment, communication etc) that will be seen to be valuable rather than the papers/conferences/citations etc themselves.
If you don't complete the degree then it'll be an uphill battle to demonstrate that you acquired those skills. Not impossible but you may well struggle to get in front of someone with a sympathetic ear.
On a related note to the last point, (probably not helpful to you and probably not popular with some readers) but the next best thing to completing the PhD will be to drop out early. Saying that you started a PhD but it wasn't for you shows maturity. Hanging on for a few years is distinctly more problematic in my experience.
Upvotes: 3 <issue_comment>username_10: I work as a consultant in industrial automation, so I've had a chance to see a fairly wide range of different corporate cultures. Among those with PhDs on staff, I've noticed a tendency for the PhDs and BS/MS engineers to group up into separate political factions. Amongst the engineers, there is a stereotyped perception that the PhDs (especially fresh PhDs) are oblivious to the practical considerations of building a product.
Many companies talk about wanting to encourage innovation in principle but, in practice, they generally favor lower-risk tried-and-true methods that complete the projects as efficiently as possible, in order to maximize profits. Research is inherently a high-risk endeavor; it appears as a red line item on the company ledger, with a return on investment that is tricky to quantify.
Since PhDs are trained as researchers, I suspect they will often approach projects from a research perspective, in order to study the problem and find good solutions. Engineers are more likely to dig through an existing box of solutions, find the ones that are 'good enough' for the requirements, then design/implement accordingly. There is some crossover of course, especially in the companies with healthy cross-culture dynamics, but this 'gap' does create some challenges.
If I were a hiring manager for a non-research position, I'd generally have a few concerns when interviewing a PhD candidate. a) How much more will they want to be paid? b) How much work experience do they have outside research? c) Will they want to stay in this position or will they move on as soon as a research opportunity pops up?
Basically, if applying for an entry-level technical position in industry, expect to face the same biases as any other 'green' candidate looking for their first job, but amplified by the perception that a PhD is going to want to earn more and possibly won't stick around. If applying for a research position, you'll likely have fewer hurdles to overcome, but I don't have much experience in this area to say for certain.
Upvotes: 3 <issue_comment>username_11: A PhD is worthless. Employers will not pay for it. Very, very few job reqs are written exclusively for PhDs, and if they are, they are either in education, meaning you'll be a teacher, or you'll be forced into management. Just because you have a PhD doesn't mean you can manage. I've been in the technical field my entire career and I make the same if not more than my PhD counterparts.
I've seen some employers flipping through resumes and throwing out PhDs because they feel that if they hired the PhD, the person would expect more money than their counterparts, become unhappy when they don't get it, and ultimately leave.
An employer looks more for experience. Can this guy make these two machines talk to each other? Can this guy get this thing coded in a month? If this guy writes code, will it be good, modular code, or am I going to be rewriting it in two years? Can this guy write code that someone else will understand, in case he leaves because he's pissed I won't pay him 15-20K more than his counterparts, like he was expecting when he got his paper? Take the paper, burn it.
Upvotes: 0
|
2015/05/06
| 1,183
| 4,436
|
<issue_start>username_0: I received an email from **world biomedical frontiers** journal:
>
> Your recent paper “XXXXXX” (published in “YYYYYY”) has been selected to be featured in our next issue of World Biomedical Frontiers, because of its innovation and potential for significant impact.
> Research results with significant potential to improve health – or to treat or prevent disease – often deserve an immediate leap onto the “front page”. However, scientific breakthroughs don't always make the front page – and some don't make any page! We are the platform for you to stand out from among ~100,000 papers published each month, in order to attract more attention from the public and potential investors.
> World Biomedical Frontiers [ISSN: 2328-0166] focuses on cutting-edge biomedical research from around the globe. Our website receives more than 8,000 visits per month from an international audience of academic and industrial researchers and developers, providing greater opportunity for your results to be recognized and appreciated.
> If you accept our invitation to feature your paper on our website, a $38 processing fee will be charged. We will then post the abstract/summary of your paper in the latest section of Stem cells , with additional information from you highly recommended to further explain your novel findings and concepts in plain language; photos and/or figures are welcomed. Here are two examples (1 and 2 ).
> In order to report breaking publications in a timely fashion, please contact us within two weeks if you wish your paper to be featured in our next issue.
> Sincerely,
>
>
> Editor
>
>
> World Biomedical Frontiers,
>
>
> New York, USA
>
>
> Phone: 1-(917) 426-1571
>
>
> E-mails: <EMAIL>
>
>
> Website: <http://biomedfrontiers.org/>
>
>
>
Should I trust them?
I read [this question](https://academia.stackexchange.com/questions/16996/an-invitation-from-frontiers-frontiers-research-topics) and also [this one](https://academia.stackexchange.com/questions/36790/would-a-legitimate-journal-send-unsolicited-email-to-an-author-offering-to-featu), but I wanted to know about this specific case and what to do about it.<issue_comment>username_1: No; this isn't how reputable venues approach authors. This will have at best zero value (possibly negative value if someone notices, which is unlikely given how little visibility they apparently have), and will cost you $38.
(It's funny that they advertise that they get 8,000 website hits per month. This strikes me as an exceedingly low count, given that they probably get a hit from many of the people they send this spam email to.)
Upvotes: 5 <issue_comment>username_2: I suggest that you do not trust them.
Reasons not to trust them:
* Their [home page](http://biomedfrontiers.org/home/) has a bare URL lying around, which suggests a lack of expertise.
* Their [FAQ page](http://biomedfrontiers.org/faq/) has some serious grammatical errors (capitalization), which again point in the same direction.
* Their [Alexa Rank](http://www.alexa.com/siteinfo/http%3A%2F%2Fbiomedfrontiers.org) has fallen, and the stats provided by them do not look reliable according to the Alexa page.
* Who provides site hits in an invitation? That's like a Google interviewer telling you Google's gross profit for the quarter before your interview.
Upvotes: 3 <issue_comment>username_3: This is a pretty clear scam. These things are unfortunately common as of late in academia. There are scam journals that try to make money off of things like "processing fees," scam conferences with scam talk invitations, and so on. Unfortunately, they'll just keep on contacting you, and you'll get more of them in the future.
In this case: if a legitimate group wanted to feature your work in this way, they'd just do it on their own. It's very possible they wouldn't even contact you, or wouldn't be contacting you with the choice of whether to have it in their publication or not. And as has been noted in some comments here, 8,000 hits per month is *nothing*; it's surprising that they don't lie and give a reasonable number.
In general, you shouldn't need to pay for things you are invited to; this is often a good way to discern scams from legitimate invitations. Additionally, you can often tell by the lack of effort (eg, form letters), and lack of knowledge about what you actually do (just copying your paper title).
Upvotes: 2
|
2015/05/06
| 1,298
| 5,208
|
<issue_start>username_0: I'm enrolled in an honours philosophy program (taking premed prereqs and maths as electives), going into my fourth year. I had a 3.9 until today, when I got my first B. I made a point of kicking the crap out of the final essay; nevertheless, the TA crushed me on it. I received less than 60%.
I'm applying to med school in the fall. Given the way that the schools that I'm going to apply to weight the grades of applicants, that B is going to be considerably damaging to my application.
There are two phases in the university's process for appealing a grade.
1. Talk to the professor
2. Appeal to the university for an independent evaluation.
To the best of my knowledge, independent reviews are usually kangaroo processes that just confirm the original decision. Fortunately, the university's process isn't the only way to resolve the problem. The TA, the professor, and the department head all could change the result.
I don't suspect the TA will change his opinion.
The professor is good friends with the TA. Generally, people tend to defend their friends when someone accuses them of making a mistake. Ultimately, if I appeal to the professor, no matter how I approach it, the appeal will amount to an accusation that her friend made a mistake. Accordingly, I don't believe that I would succeed if I were to do that. Moreover, if I were to appeal to the professor, she would need to justify her decision, and by doing so, would become convinced her TA got it right.
Accordingly, I see three ways I could go about it:
1. Write the professor (I can't meet with her, I'm out of province this week, and need to contact her within seven days) and hope it works out (I'm pretty confident nothing will happen).
2. Write the professor and, using all the tact I can muster, gently allude to the escalation process, and the fact that it would just be easier to give my essay a fair shake. I saw a lawyer use the 'it's just easier to say yes' approach with a judge once. It worked surprisingly well. Nevertheless, it's kind of a jackass thing to do, and could backfire if my tact fails me.
3. Approach the department head: I'm in his good book. I first got to know him after he emailed me to talk about pursuing work in philosophical research. It was out of the blue, so I figure that's some sign that he'd like to see that happen. (I'd like to help solve some of the conceptual problems that predominate psychiatry.) He and I have spent about ~20 hours working one on one to solve some philosophical problems. I've done well in his classes. So, he knows I generally do good work and he seems to want to ensure things work out well. I suspect that he would doubt that the TA could justify giving me <60% on the essay. Perhaps he might suggest some way to fix the problem, or might offer an alternative if I can't fix it (e.g. 'Try this, and if it doesn't work, come back and talk to me.')
How can I effectively challenge the grade that I've received?<issue_comment>username_1: 4) Write the professor and, in addition to asking that the grading be reviewed, ask whether there's any way you could tackle a piece of extra-credit work to show that the grade does not accurately reflect your mastery of the material.
It's your grade. Own it. Either a mistake was made, in which case a review should fix it unless there is active malice -- which you give no reason to believe -- or your own evaluation of your work on the final is flawed, and if you want a better grade you need to give some justification for deserving it. Volunteering for the latter may make the former moot.
You may be charged for the summer session if you take this approach. It sounds like you feel that cost would be justified.
Just be glad you don't have your heart set on becoming a veterinarian. From what I've heard, vet school is harder to get into than med school.
*(Valid counter-arguments below. But I'm going to leave this up because I think that's worthwhile discussion.)*
Upvotes: 0 <issue_comment>username_2: I would go through the process set out by your university. Talk to the professor first. You don't have to phrase it as an attack on the TA. Just tell the professor that you feel the grade was too harsh and that you would like him or her to review it.
If you feel that the professor doesn't fairly address your concerns, then take it to the department head, and perhaps ultimately to an independent review committee. Typically, in order to get a grade changed by a committee, you will have to show that there was an error in grading your paper or that your paper was not graded consistently with the other papers. If the TA graded everyone's papers harshly, then there's not much of a basis for a complaint.
As was pointed out in the comments, if you go to the department head before you talk to the professor, the professor will likely be annoyed at being blindsided by an issue they were unaware of. Also, faculty are given broad leeway in grading their classes as long as it is done consistently. Unless there is an egregious mistake and the faculty member is uncooperative, it is not the department head's place to interfere, and they will most likely just leave it to the professor.
Upvotes: 3
|
2015/05/07
| 678
| 3,242
|
<issue_start>username_0: Can you point me to resources (e.g. books, articles, etc.) or methods for teaching students how to think more logically/systematically about doing research?
I've seen a number of intelligent graduate students who do great work performing experiments, but struggle to make logical arguments about their research and conclusions. For instance, I see students making statements without providing evidence/arguments for such statements (e.g. "increasing the temperature will cause the experiment to proceed better" but no evidence/reasoning is provided). I see writing from students that is a disorganized mess with no clear path or direction. I often see short, confusing descriptions in written drafts when thorough explanations should be given. I see students put together graphs that are incomprehensible. On and on...
It seems that many students don't "get" how to think scientifically or in a research-oriented way. I know this comes with time and experience, but perhaps there is a way to teach students the underlying principles and approach to doing research?<issue_comment>username_1: Of course there's a way to teach them: have them enroll in a research-oriented PhD program and practice for five years! An effective approach to research is a skill that takes a long time to develop and will have prerequisites that also take a long time to develop (e.g. how to read scientifically, how to formulate scientific questions, how to verbally express a scientific thought clearly).
That being said, engineering degrees are largely aimed at formalizing skills that would otherwise take a decade or more to acquire through practice. If you can describe a procedure formally and in enough detail, students can perhaps just follow the procedure exactly and get acceptable results. I don't know whether this would work or be a good idea for research (BS in Research Engineering, anyone?), but it's moot because science programs tend to emphasize content rather than process.
Upvotes: 2 <issue_comment>username_2: From what you described it seems like your students are having trouble with writing clearly and persuasively. If it is possible, you could ask your students to take some classes in humanities or social sciences. Although these classes do not help with the technical skills of your students, they do often require writing skills and critical and close analysis of texts.
As an alternative, you could find some well-written paper in your field, and go over them with your students. Ask the students to pay close attention to the phrasing, structure, and logical flow of the paper. Ask them questions such as "Why did the author review literature on XXX instead of YYY in this paragraph?" or "What potential confound can you think of in this study, and how did the author deal with them and explain their methods?" or "In what order did the author describe their findings?" With enough close reading, your students should get a feel of how good scientific writing is done.
Finally, you can always recommend books on critical thinking and scientific methods to your students. A search on Amazon with keywords such as "introductory logic", "critical thinking", "scientific methods", etc. should give you plenty of choices.
Upvotes: 1
|
2015/05/07
| 539
| 2,331
|
<issue_start>username_0: I know that a big part of academic job interviews is for the candidate to evaluate whether they *want* to be in the department (not only whether the department wants them).
Are there some non-obvious general factors that signal a possibly dysfunctional department, or an otherwise undesirable position, that a candidate should be aware of? For example (but not limited to), is a poorly attended job talk, or only meeting a relatively small fraction of the faculty, a warning sign of something awry?
I'm asking this question because I have had two successful interviews and am weighing competing offers. This has left me considering some of the finer aspects of the interview experience.<issue_comment>username_1: Some things to consider while judging the department might be recalling the following things from your interview:
* How enthusiastic were the folks while talking to you? How many details about the school, department, colleagues, etc. were mentioned? Did the chair mention that you would make a good team with Prof. Y because your interests and Y's match closely?
* What kind of publications come out of the department: frequency, quality, collaborative vs. solo, etc. may give you an idea of the work patterns.
* What activities are going on: have they organized seminars, invited talks, workshops, etc.? Who has visited them in the past?
* What are the missions of the department: student success, diversity, faculty success, being in the top 1%, being a research university, etc.
* What kind of answers did you get when you asked questions about your life on campus? What was the tone from different folks: welcoming, hesitant, unsure?
Upvotes: 3 <issue_comment>username_2: One thing to consider is tenure rates. Some departments are much more difficult to gain tenure in than others. My undergrad institution is well known for tenure cases that end in litigation. Similarly, I am aware of a department (small liberal arts college, not MIT or the equivalent) that denied tenure to something like six assistant professors over the course of a decade without granting tenure to any of its hires.
Additionally, it's worth considering whether the department has autonomy over the hiring process. If the meeting with the dean is not largely a formality, this might be an issue.
Upvotes: 4
|
2015/05/07
| 1,638
| 6,447
|
<issue_start>username_0: Why is it customary to call people with doctoral degrees doctors but not people with master's degrees masters? They are both graduate degrees that supersede the undergraduate degree.<issue_comment>username_1: If you're talking about the use of doctor as a title, as in "Dr. Smith", I doubt there's any compelling explanation. Most degrees don't come with titles: nobody says <NAME> or Bachelor Smith or <NAME>. Historically, magister (corresponding to the master's degree) was just as appropriate a Latin title as doctor was, but it simply isn't used in modern English. These titles are nearly gone, with just one remaining. It's probably no coincidence that the last remaining title is also the fanciest, but that's just speculation.
Even for the doctorate, the use of the term "doctor" has degenerated to the point where in English it can only be used as a title, and not as a general noun. If you say "my friend <NAME> is a doctor", practically everyone will assume he's a medical doctor. You could only get away with the more general usage in the narrowest academic context, and even there it would be considered pretentious and archaic.
Upvotes: 5 <issue_comment>username_2: *In some countries the custom is different.*
In the Czech Republic, Europe, where I come from, it still is generally customary to call masters *masters*. The title is different (*magister*, *ingenieur*\*, or *doctor*\*\*) but is more or less equivalent to the American master. And yes, it makes it easier to study for the sole purpose of being called ~~names~~ titles.
In neighbouring Germany, however, only doctors with a degree equivalent to a PhD are titled by degree.
---
\*) Not to be confused with *engineer*, the *ingenieur* degree means roughly *master of engineering*.
\*\*) To add to the confusion, the *doctor* degree can denote various *levels* of degree, not all of them equal to a PhD.
*Update for clarification*
*Magister*, *ingenieur*, and *doctor* are called *magistr*, *inženýr* and *doktor*, respectively, in Czech. The names come from Latin which is still used widely in Czech academia (where applicable). The respective abbreviations are *Mgr*, *Ing* and *Dr*. Thank you, <NAME>, for bringing this up.
Upvotes: 4 <issue_comment>username_3: The situation in Austria is similar to what username_2 Petrman describes. We do like our titles **a lot**.
Although nowadays almost all studies follow the Bachelor/Master system, as an engineering/science graduate one is still allowed to use the traditional title **"Diplomingenieur"**, usually abbreviated as **"Dipl-Ing"**, instead of a title indicating the Master's degree. This is done mostly because it has a very high reputation in Austria and other German-speaking countries. So once I graduate, I will be allowed to either call myself "username_3, MSc" or "Dipl-Ing username_3". However, calling myself "Dipl-Ing username_3, MSc", which is sort of a wet dream for every title lover, is not allowed (but you do see it sometimes).
The equivalent for non-engineering/science studies was the title **"Magister"**. However, graduates of those studies are only allowed to use their Master's degree (typically a Master of Arts degree), as was the original intention when switching to the Bachelor/Master system.
In regard to titles, Austria has a lot more anachronisms. For example, the title **"Hofrat"** is still in use; it usually comes with a high-ranking government job (it's not an academic degree). The title comes from the good old times when Austria was an Empire: "Rat"¹ means advisor, "Hof" designates the imperial court, so Hofrat literally means "advisor of the imperial court". Although Austria has been a Republic for 70 years now, the title is still in use.
So you think that's a bit crazy? Well, you can also combine academic and other titles. So a high-ranking government official might feel that he should only be addressed as "<NAME> Dr. Huber". His wife, although not having earned any titles herself, might call herself "<NAME>".
Like I said, we do like our titles.
¹ Pronounced with a long "a", not like the English "rat".
Upvotes: 4 <issue_comment>username_4: This question makes it sound like you can go around calling people *doctor [name]* outside an academic environment. I don't think that's true in English-speaking countries. When I got engaged to my wife (US born and raised), her mother was so excited that she was going to marry someone with a PhD that she started introducing me to the extended family as *my future son-in-law, Dr. username_4*. Literally everybody who heard her introducing me like that thought she meant I was a medical doctor.
So, I don't think there is anything weird in not being able to call people *master [name]*. You just can't use academic titles outside academia.
Upvotes: 0 <issue_comment>username_5: Historically, in the US, titles are not emphasized. Part of this has to do with the history of the US rejecting royal authority (ie, knighthood and family/land titles). Another aspect, though, is that academia is already considered pretentious to some degree, and requesting others address you according to your educational title in all social situations won't endear you to others, instead it sets up an unequal relationship.
Today, however, so many people have bachelors (30% of the US) and even masters degrees that it makes little sense to call out your achievement, when so many others around you have attained the same degree. PhDs are still relatively rare.
Lastly, note that it's largely within academia that the title is used on a regular basis - where even more people have bachelors and masters degrees. Usually one addresses their teacher with a title in many cultures (先生 in Japan, for instance). In the US, it's "professor" or "doctor", and usually professor is preferred.
The reality, though, is that other than medical doctors and outside academia, few PhDs that I'm aware of want or expect others to use their title. Demanding someone use a title when they address you is often seen as arrogance.
Upvotes: 3 <issue_comment>username_6: Technically the proper term of respect for an individual with a master's degree is "Mister". I think this usage is similar to that in the Royal Navy, where a ship's master would normally be referred to as mister, as in "Mr. Brown". Common usage has long ago eroded any significance that the title may once have had.
Upvotes: 0
|
2015/05/07
| 644
| 2,721
|
<issue_start>username_0: I'm in the process of finalising my exam for my unit in second semester (based in Australia).
The exam will have 3 essay questions out of a total of 6 that students will receive prior to the exam. They will be asked to prepare responses to these 6 essay questions, but only 3 will appear on the exam (originally I had wanted to do 3/10 as a former professor of mine had done, but was asked to reduce this to 6).
These exams will be assessed against a qualitative rubric with no comments provided.
Other than considering faculty rules/procedures regarding whether or not students can receive feedback from formal examinations (which I still need to inquire about):
* Are there any downsides in providing students qualitative rubrics from their formal exams after final marks have been posted?
* Does the term/nature of a 'Formal Exam' mean that students should not receive feedback?
(As a side note, my course has to have an exam as it's listed in the 2015 handbook, but next year this requirement has been removed and assessment will be 100% in-class).<issue_comment>username_1: One potential drawback comes to mind: if you release the rubrics and/or each student's assessment against these rubrics (it's a bit unclear to me which one you want to do), you will open the gates wide to long, tiresome discussions.
If students don't know the rubrics against which their work is assessed, they will have a harder time arguing that "*obviously* they addressed this topic, so they should get full points."
(I'm not arguing against releasing rubrics and/or assessments just to reduce the teacher's workload. I'm pointing out a potential downside. I'd still argue that releasing at least the rubrics makes sense to help students improve. Just be prepared for discussions. A small percentage of students can be very tenacious in such discussions and appear to spend more time disputing their grade *after* the exam than preparing *before*.)
Upvotes: 2 <issue_comment>username_2: Actually, you shouldn't *require* the students to do anything for which they receive no feedback at all. Even the best students need a bit of confirmation that what they have done is good.
But you don't need to release the rubric for the exam if the scale required for giving feedback is reasonable. For students writing on paper, a note on the paper itself is good, provided that the papers are returned to the students.
In other cases an individual email with a few comments is fine. You can even prepare a lot of those comments in advance or as you grade, so that you can just use copy/paste to write a lot of them.
But learning requires both practice and feedback. That is why professional athletes still use coaches.
Upvotes: 0
|
2015/05/07
| 1,491
| 6,086
|
<issue_start>username_0: **Background:** I work in a field where the use of LaTeX is common but still far from universal. So far I have been lucky in that all my coauthors in the past have been fellow LaTeX users, so collaboration boiled down to creating a shared Dropbox folder with the .tex and .bib files we were working on. Now I'm starting a collaboration with two colleagues who use Word, so they have proposed to collaborate with Google Docs. There are various reasons why this is a bad idea, the main one being that writing this paper is going to require doing things that are easy in LaTeX but difficult and time-consuming in Google Docs (or in a standard word processor, for that matter) ---e.g., Greek letters for variables, assorted math/logic symbols, trees (in the graph-theoretic sense), or frequent crossreferencing in the running text of numbered examples.
**Conflict:** One of my colleagues has already said he has no interest in learning LaTeX. On the other hand, I don't want to go hunting through the Google Docs character map every time I need to insert a non-Latin character.
**Question:** Is there any collaborative writing software that allows including LaTeX tags and environments in a Word-like document? For example, when I'm writing semi-informal things like lecture notes, I can get by with markdown and then generate a pdf with pandoc. I don't know of any online services with similar functionality.<issue_comment>username_1: Excluding LaTeX-focused online collaboration services, such as [ShareLaTeX](http://www.sharelatex.com), due to your new collaborators' preferences, I think that you have pretty much two major options, as follows:
* [**Overleaf (formerly WriteLaTeX)**](http://www.overleaf.com)
* [**Authorea**](http://www.authorea.com)
Both online academic collaboration services support simultaneous use of LaTeX and a WYSIWYG **rich text mode**; Authorea also supports some **other formats**, such as Markdown and HTML. Both services (to various extents) offer other nice collaborative features, such as data sharing, version control, revision notes and much more. Due to the multi-format support, I would prefer Authorea to Overleaf; however, the final decision should be made upon a comprehensive *comparison* of both services across all available features and your detailed requirements, as well as some *testing*.
P.S. Just for completeness, I will mention **two other options**. The first is to use *blog engines* that support both WYSIWYG and LaTeX (most of the major ones do: from WordPress to Jekyll). It's a decent option, but I would prefer one of the above-mentioned dedicated services, for multiple reasons. The second option is to self-host *RStudio Server* (or maybe a *custom Shiny application*), which would allow academic collaborative writing, using RMarkdown, but IMHO this is the worst solution possible, as trying to implement various needed features and solve issues, such as version control integration, would bring you and your collaborators a lot of headaches.
Upvotes: 6 [selected_answer]<issue_comment>username_2: I worked with someone who didn't use LaTeX, and by the last paper we worked on together, we figured out something that worked for us:
* I wrote it in Word, peppered with things like `\cite{CiteKey}`, `\caption{\label{fig_some_figure}This is the figure caption}` and `\ref{fig_some_figure}` (i.e. all the cross-referencing stuff).
* The equations (only a handful) were pasted as images from a compiled LaTeX .pdf.
* The figures were converted to .bmp (Word doesn't handle any sensible vector graphics formats, or at least didn't in the version I couldn't be bothered to upgrade from).
* The contents of the relevant .bib file were at the end.
The figures needed commenting on and only the text needed editing, so it worked out for us. We used Word's "track changes" tools for the text edits. At the end I just pasted the text into a text editor, added about 3 `\emph{}`s and compiled. It's a complete hack of a way of working, but the extra effort was minimal and all on my side, so it was a saving of effort compared to writing the paper in Word, which realistically was the alternative.
Upvotes: 3 <issue_comment>username_3: If you're willing to drop the "online service" requirement, you might consider showing them [LyX](http://wiki.lyx.org), which is a WYSIWYM/WYSIWYG editor that:
* Compiles to LaTeX
* Allows one to add basic formatting as in Word
* Allows one to add figures, tables, and citations as in Word.
* Allows one to add raw LaTeX code
* Has a super-usable equation editor that compiles to LaTeX, but you can insert equations without writing LaTeX (you can write it if you want).

LyX is not so good at sharing --- but I've collaboratively worked on LyX files using a Git repository (you could use any other version-control service), and it was a very good experience.
There is a slight problem --- if your colleagues don't use any version control system (or don't know what it is), it is possible to:
* On windows: use some Git Gui ([Tortoise Git](https://en.wikipedia.org/wiki/TortoiseGit)). I've seen non-technical people use [Tortoise SVN](https://en.wikipedia.org/wiki/TortoiseSVN) to share documents.
* On Linux: add some scripts that perform VCS operations.
* In the comments, using Dropbox was suggested --- Git is the better choice in terms of feature set and stability, but Dropbox may be easier to learn.
Upvotes: 4 <issue_comment>username_4: To address the non-Latin character problem you mentioned in the "conflict" paragraph, you can use AutoHotKey (Windows) or AutoKey (Linux) to insert the characters for you when you type specific sequences of keystrokes. The gif explains what I mean: you type /delta and it automatically gets converted to δ (regardless of application).
[](https://i.stack.imgur.com/u1NUI.gif)
Instructions are given [here](https://brushingupscience.wordpress.com/2015/12/28/add-any-symbol-without-leaving-the-keyboard/)
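If AutoHotKey/AutoKey is not convenient for some collaborators, roughly the same effect can be scripted with the third-party Python `keyboard` package. This is only a minimal sketch, assuming its documented `add_abbreviation` API behaves as described (and note that on Linux it typically needs elevated privileges):

```python
# Minimal sketch using the third-party `keyboard` package (pip install keyboard).
# Each abbreviation is replaced system-wide after you type it,
# similar to an AutoHotKey hotstring.
import keyboard

GREEK = {
    "/alpha": "α",
    "/beta": "β",
    "/delta": "δ",
    "/Sigma": "Σ",
}

for abbreviation, symbol in GREEK.items():
    keyboard.add_abbreviation(abbreviation, symbol)

keyboard.wait("esc")  # keep the script running until Esc is pressed
```

The abbreviation table above is just an example; you would extend it with whatever symbols you use most often.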
Upvotes: 2
|
2015/05/07
| 1,417
| 6,728
|
<issue_start>username_0: What can be done to correct an error that might be hiding a certain author's sole purpose of publishing a paper, no matter the soundness of their method?
In a nutshell, the particular case that spawned this question can be summarized in a few sentences:
* a paper presents a mathematical construction and proposes a numerical solution for the resulting nonlinear optimization problem
* the problem has many local solutions that are not numerically distinguishable
* it is impossible to ensure convergence to the real solution without actually starting near it
* there are too many local solutions (an infinite set in some cases) to simply test the best sample
* the authors claim unconditional convergence to the true solution, regardless of the parameters
* their results are consistent with their goal, but the mathematical formulation contradicts the conclusions, judging simply by the properties of the objective equations.
Hence, such results are, at best, a stroke of luck, if not the product of an approach rather different from the one described; the sketch below illustrates the issue.
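To make this concrete, here is a minimal toy sketch (my own illustration, not the authors' actual formulation) using `scipy.optimize.minimize`: a smooth objective with many local minima, where a standard local solver typically ends up in whichever local minimum lies near its starting point.

```python
# Toy illustration only -- not the formulation from the papers in question.
# A local solver typically converges to a nearby local minimum, so a claim of
# "unconditional convergence to the true solution" cannot hold for objectives
# of this shape without extra structure or a good initial guess.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Many local minima; the global minimum is near x = -0.3.
    return np.sin(5.0 * x[0]) + 0.1 * x[0] ** 2

for x0 in [-4.0, -1.0, 0.0, 2.0, 5.0]:
    result = minimize(objective, x0=[x0], method="BFGS")
    print(f"start {x0:5.1f} -> x* = {result.x[0]:7.3f}, f(x*) = {result.fun:7.3f}")
```

Running this prints several different "solutions" depending on the starting point, which is exactly the behaviour that an unconditional convergence claim would forbid.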
After spending almost one month trying to understand not one, but a series of three inter-linked articles on a scientific subject, I have mathematically proven that the central assumptions and claims in that series of papers were wrong and/or incomplete. As the papers were peer reviewed and even published in reputable journals or conferences, I feel there is a need to save other people the trouble of wasting valuable time trying to reproduce a falsely advertised behaviour. What is the proper action to take in this situation (for example, writing to the editors of the journal)?<issue_comment>username_1: Write a paper explaining what the errors are and how they invalidate the results of the papers in question, then submit it to a journal with good visibility and get it published. Writing directly to the journal editors is appropriate only if you have good evidence that the errors in question are deliberate (e.g., the authors have fabricated data in order to obtain the results they wanted).
Upvotes: 6 [selected_answer]<issue_comment>username_2: >
> I have mathematically proven that the central assumptions and claims in that series of papers were wrong and/or incomplete.
>
>
>
This is a very complicated statement and it is important to understand it in order to know what to do. The two issues are assumptions and claim.
People often make assumptions to solve difficult research problems under the assumed conditions. If your assumptions are too extreme, or worse known to be wrong, then no one will care about your solution. The key issue, however, is that if someone later proves that your assumptions were wrong this does not make your results wrong. It just means that the conditions for which you solved the problem are uninteresting.
Given a set of assumptions, regardless of whether they are true or untrue, claims based on those assumptions can be correct, incorrect or incomplete. If in a further investigation you realize that the claims are incorrect or incomplete, or that the assumptions needed to obtain the claims are either incomplete (i.e., you need additional assumptions) or incorrect (i.e., the wrong assumption was made, not an assumption that was wrong), then there is an issue with the research that should be corrected. Most journals have mechanisms for correcting errors or at least alerting readers to errors.
The concept of proving an assumption to be wrong is a strange idea. An assumption is an idea that you take to be true while an idea that will be subjected to testing is generally called a hypothesis (or in mathematics I believe a conjecture). Taking someone else's assumption and hypothesizing that it is true (or false) and then testing that can be very valuable research. Proving that the assumed conditions do not occur reduces the importance of the previous research that assumes the conditions occur, but it does not change whether that previous research is correct or incorrect. In this case you need to decide if the proof that the conditions do not occur is interesting enough to publish.
Upvotes: 5 <issue_comment>username_3: Approach the authors to explain the issues to them and ask them how to work together on publishing a correction of their own papers. If possible, do so with the help of someone you know who knows them.
Unless you are an authority in the field, in which case you can do whatever you want, it's much easier and more productive to "withdraw" published claims with the authors' collaboration. Also, you might earn a long-term collaboration with their research group.
This suggestion is based on how a professor I know (with an h-index around 50) solved such an issue.
Upvotes: 0 <issue_comment>username_4: Basically there's nothing you can do. I had exactly the same experience with a paper written by some Ivy League computer scientists whose algorithm I was supposed to implement. Their papers contained no information about how they chose the starting points for their optimization, which was a serious problem because there were many local maxima. They had written a software package, but it had been withdrawn from circulation.
I discussed my problems with other researchers in the field and was told that it's generally known that "there are problems with that paper." That's as far as it goes, really. It would be nice if there were some way of calling them to account for wasting people's time and making claims that they couldn't substantiate, but there's really nothing you can do.
Upvotes: 3 <issue_comment>username_5: Of course, you could write a paper proving the original work wrong and try to get it published. But experience shows that such papers tend to have a hard time in review, because they challenge a (more or less) widely accepted method. Just consider: even if the method is not guaranteed to yield the optimal result, it may still be "good enough" to be useful for many purposes. Simply refuting the method is thus only of limited use for the scientific community - in the absence of a better method, they will just keep on using it. Your paper may end up being simply ignored.
Ideally, you'd start from your proof that the method doesn't work and develop an improved method that avoids some of the shortcomings of the original method. If you can show that your new method really improves performance, you'll be in a good position to publish with good visibility. Such an approach would also be a lot more useful for the scientific community, because you don't simply refute the original, but improve on it.
And make sure you publish that code under an open source license in a persistent, public repository, so everyone can use and improve it.
Upvotes: 1
|
2015/05/07
| 2,271
| 9,734
|
<issue_start>username_0: I recently took a class where the Professor created multiple online study guides, on popular student sites, which intentionally contained the wrong answers. He did this because his exams were based heavily on the end of chapter questions in the book. Also, the Professor told us at the start of class using Google to help answer the questions would be useless because all the top results regarding these chapter questions were his wrongly answered guides.
Was this ethical of the Professor to do? The book used in the class contained no answer (or partial answer) keys and no additional student material was provided for the book. I understand he did this in order to force students to read the book and keep them from simply Googling all the answers at the end of the chapter. However, this removed a way for students to verify their answers were correct.
Edit: In response to some of the comments, he was not bluffing. During the course I was unable to answer one of the questions (it turned out to be a misprint in the book) and tried using online resources. The wrong guides were easy to spot, because half the chapter questions involved determining which SQL statements were valid. Some of the guides were simply answered as A, B, C, D, A, B in order down the list. Others were just flat-out wrong. Also, as far as the exams went, I shouldn't say they were heavily based on the book. The Professor literally copied and pasted the questions from his instructor's manual and didn't bother changing any wording or the answer order.<issue_comment>username_1: Well, like everyone, he is free to post on the internet anything he wants that is considered legal. Wrong answers are legal from that perspective.
Regardless of what he intends to teach or enforce with his students, there are likely other people who google similar questions and are misled by his "trolling" (to use internet jargon). For a professor, who should be teaching and spreading knowledge to people, spreading misinformation to intentionally mislead readers is wrong and unethical.
Furthermore, I guess the book is not very well written; the books I studied usually had questions which challenged the general understanding of the subject rather than a googleable fact.
In the end, I don't think there is anything you or anyone else can do about him. Even under pressure, the professor can simply post anonymously and not tell the students up front.
PS I'm really perplexed that someone takes time to systematically alter the perception of some topics on the internet, instead of only adjusting the (exam) questions like every other teacher I know does. This tilting at windmills shows extreme weakness of character in my eyes.
Upvotes: 1 <issue_comment>username_2: I have a strong negative opinion on this.
In 2002, I joined a PhD program and was at the same level of computer science education as peers who had recently completed CS degrees at good schools. My last prior formal CS education was a master's degree that I completed in 1975.
I achieved that, as well as staying employable in the computer industry for over 30 years, by continuous independent study. As computer science kept changing around me I felt at times like the Red Queen in Alice in Wonderland: "Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!"
Not having answers to the questions in a textbook was something I could handle, though undesirable. If I had found answers on-line that conflicted with my answers I could have wasted a lot of time trying to resolve the discrepancy, including trying to contact the answer author to point out an error.
Wrong answers to questions in a good textbook are particularly destructive. I put a lot of effort into selecting the books I use. In order to progress, it is necessary to attempt exercises that are a stretch. In some cases, it is difficult to check whether an answer is correct. Searching on-line for answers may be the best available resource.
The professor was, intentionally or not, sabotaging independent study.
Upvotes: 8 [selected_answer]<issue_comment>username_3: This "solution" you've presented [was brought up in a related question from a while ago](https://academia.stackexchange.com/a/30012/22013).
>
> ...Then I went to Yahoo Answers, made a bunch of fake accounts, and posted tantalizingly wrong answers to all of my own HW questions. I have told all subsequent students not to google the HW answers because there are wrong solutions out there.
>
>
>
The consensus at the time was that this is not appropriate, and ultimately impedes the process of learning for most of the community for the "benefit" of preventing cheating in your class.
Let's consider the action your professor has taken.
>
> Also, the Professor told us at the start of class using Google to help answer the questions would be useless because all the top results regarding these chapter questions were his wrongly answered guides.
>
>
>
Let's look at the effort spent trying to do this. The professor found the right answers, and then purposely answered them wrong, and published them around the web to "solve" his problem for a relatively "personal" benefit of ensuring academic integrity in his own course, at the expense of *every* student of that course in the world.
At the very least, it's not helping anyone. At the very most, if he is using his *position* as a professor (i.e. actually listing his credentials/qualifications) for these study guides, that would raise additional red flags that could potentially be grounds for something that the university might need to be made aware of.
>
> He did this because his exams were based heavily on the end of chapter questions in the book.
>
>
>
In my opinion, he should have instead used his time to write exams that were not so heavily based on end of chapter questions in the book.
Upvotes: 6 <issue_comment>username_4: It is massively unethical, because the internet does not exist in a vacuum.
Consider the possibility that someone looking for a valid answer online, because they do not have or cannot afford this textbook, finds your professor's answer and assumes it is correct. Because of his deception, he has misled this and every other person who seeks this answer by knowingly posting the wrong answer himself.
It is also ultimately futile and harmful to the learning process - it discourages students from trying to use all resources available to them, discounts the possibility that the *text* could be wrong, and gives students a sense that they are being cheated by the professor.
It is unethical, it is detrimental to the learning process, and frankly his disrespect and sabotage of online resources makes him look like a luddite.
Upvotes: 6 <issue_comment>username_5: I agree, this is somewhat **unethical**.
I can understand wanting to discourage just Googling the answers - however, where the unethical part kicks in is for those people who *don't* know that this professor has *deliberately* poisoned the well and released study guides that are flat out *wrong* - and pushed those wrong guides high enough in the results lists to be common.
Now, not knowing this, a person unrelated to the class in question gets hold of this guide; they *might* use said guide to try to learn the material - only to be using flat out wrong materials, not knowing they were *deliberately* made to be *wrong*.
Depending on the *knowledge* level of the querier, they *might* figure out the guide was wrong - but, what if they don't know any better? As was pointed out, he deliberately made sure *his* bogus guides were *highly* likely to show up as a result.
I also see a problem in the fact it's being pushed up to a global search engine - so, it's spreading misinformation to more than just the class.
I mean, if it were just Professor X's CS310 class study guides - OK, bad form poisoning those specific keywords - but Google sub-parses the documents, so now *other* people see these results.
Sounds like he is just too lazy to write up good questions and wants to just cut and paste his Teacher's Edition textbook questions.
I just wonder what his teacher/course evaluations are like, and what *his* supervisors and those higher up the food chain think of his practices and of him doing this. Maybe it's time (or *just* after you get your grade) to go to the head of the department and/or the dean of students, figure out what is going on and why this is considered an acceptable practice, and explain why you feel it's unacceptable and how this kind of thing inhibits self-learning. Pretty much, come in ready to defend your position. If it's not just you, well, then the more the merrier - show the administration how you feel.
Perhaps they aren't aware of exactly what he's doing, and the extent to which he's going about it.
Defend your position, but don't necessarily come off as overly troublesome and trying to make drama - more as concerned about this issue.
Upvotes: 2 <issue_comment>username_6: If an academic publishes incorrect information that's an ethical problem. Academics who lie when publishing information about their domain of expertise when it suits them can't be trusted to be honest when they publish scientific papers.
I would look at the ethical guidelines of your university to determine what they have to say about lying and deliberately publishing incorrect information. If those guidelines not only forbid lying in scientific papers but are more broad, forward information about the case to the relevant authority in your university that deals with ethical breaches.
Upvotes: 4
|
2015/05/07
| 1,398
| 5,828
|
<issue_start>username_0: When repeating a course, why are the grades averaged for GPA?
For example, if a student takes course X and receives a C (2.0) and then retakes the course and receives an A (4.0), my university (and I believe most universities in the US at least) will end up with a "B" (3.0) impact on their GPA.
The student has exhibited an "A" level of "knowledge", so why penalize them? If the student attempted to learn on their own and did not attain the "requisite" understanding prior to attending the course but ended up doing very well in the course, they would not be penalized.
Or perhaps offer two measures of GPA, one as a summation of everything and one that only includes the student's best performance for each course taken?<issue_comment>username_1: An average is used to reduce the impact of grade inflation and encourage people to do well the first time. You always take something more seriously if it counts for something.
If not, people could take a hard course once to get all the tests and answers, and then "do it again" to get an A in the class, and have no evidence of the D from the first run.
Upvotes: 1 <issue_comment>username_2: As others have said, different universities -- and even different programs within a university -- have different policies about when repeating a course is allowed and if so, how it counts towards the GPA.
Actually, it's even more complicated than this: the same student enrolled in the same program can have multiple GPAs! It is my understanding that at many if not most American universities, there is one GPA that is simply the average of the numerical scores (on a 4 point scale) over all courses one has taken. With respect to this system, if you take a course once and get an A and then again and get a C, then it does not go into your GPA as a 3.0; it goes in as a 4.0 and a 2.0. The average is the same, but the weight is twice as much, reflecting the fact that you completed two courses rather than just one. However, GPA "in your major" may be counted with a different weighting system.
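To make the weighting concrete, here is a minimal sketch of a credit-weighted GPA calculation. The course names, the 3-credit-hour values, and the extra course Y are made up purely for illustration; actual institutional rules vary, so treat this as an assumption-laden example rather than any particular university's policy.

```python
# Hypothetical illustration: each attempt at a course is a separate entry,
# so a repeated course contributes twice to both grade points and credits.
courses = [
    ("X (1st attempt)", 3, 2.0),  # C on a 4.0 scale, 3 credit hours (assumed)
    ("X (retake)",      3, 4.0),  # A on a 4.0 scale, 3 credit hours (assumed)
    ("Y",               3, 3.0),  # B in some other course, for contrast
]

total_points = sum(credits * grade for _, credits, grade in courses)
total_credits = sum(credits for _, credits, _ in courses)
gpa = total_points / total_credits

print(f"GPA = {gpa:.2f} over {total_credits} credit hours")  # GPA = 3.00 over 9 credit hours
# The two attempts at X average to a 3.0, but together they carry twice the
# weight of a single course, so they pull the overall GPA toward that average
# more strongly than a single 3.0 entry would.
```

A "best attempt only" policy would instead drop the first entry from both the numerator and the denominator, which is exactly the policy difference being discussed here.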
You seem to think that there's something unfair about averaging the grades. To me, it sounds like a generous system: and in fact, I don't think that it is guaranteed that a student is allowed to repeat a course to improve their grade if the grade they got the first time around is considered "satisfactory". This practice again depends on the university and the program.
You write:
>
> The student has exhibited an "A" level of "knowledge" so why penalize them.
>
>
>
Sorry, where is the penalty you speak of? They got a C and they got an A, and both are being recorded.
>
> If the student attempted to learn on their own and did not attain "requisite" understanding prior to attending the course but ended doing very well in the course they would not be penalized.
>
>
>
Again, I don't see how anyone is being penalized. A low grade is not a penalty.
>
> Or perhaps offer two measures of GPA, one as a summation of everything and one that only includes the students best performance for each course taken?
>
>
>
If you want to present this alternate calculation of your GPA to someone else, you can. But you also have to record the official GPA.
Finally, I think that the practice of allowing a student to take a course multiple times and only record the best grade is a poor one, for several reasons:
i) It is fundamentally not transparent. The standard unit of coursework in most American universities is the semester-long course. You are registered for a certain number of courses each semester, and you get grades for each of them. If grades get replaced by later grades, this kind of information is lost, and it may be that the student is recorded as taking nothing during that semester! This leads to:
ii) Because high course grades are very desirable, this practice would encourage students who got good but not optimal grades to repeat the course until they got the highest possible grade. If you think this through soberly for a while, you'll see it's just a bad idea. The whole point of a grade like a B is that it's good enough to move on to the next course. Students who repeat courses until they get A's could take much longer getting through the program. It will create a culture where most students who are taking the course the first time are taking it "for practice only". If courses were heavily populated with more advanced students who did well the first time around and are insisting on taking it again to get the highest possible grade, then that could seriously skew where the course is being pitched, which would encourage yet more students to repeat every course they take. Finally:
(iii) We do not want students to be able to graduate with a perfect GPA if they spend seven years in college instead of four. Moreover -- sorry to divulge the ugly truth -- **we don't want a substantial percentage of college students to have the highest, or essentially the highest, possible GPA**. While the competitive nature of GPAs in the current university environment has many unpleasant aspects, the vast majority of American culture is fully bought into it...especially including the students. If 50% of every class got a 4.0 GPA then most of the advantages of having such a high GPA would evaporate, and employers / graduate schools would hire you based on other metrics. (Note that people often talk about grade inflation, but really grade inflation is only a problem to those who are not familiar with the current university system. For everyone who is sufficiently informed, it doesn't matter whether the average grade in a course is a C or a B+. Still, after taking about 40 courses, there will be a small tail of students who have gotten almost all A's. So there is no problem here.)
Upvotes: 4 [selected_answer]
|
2015/05/07
| 1,513
| 6,626
|
<issue_start>username_0: **Situation:** I’m completing a master’s degree in computer science. Currently, I’m at the final stage and finishing writing my thesis.
Usually, similar theses in my institute have the following structure:
1. Introduction.
2. Literature survey.
3. Proposed solution, method, framework, ...
4. Implementation and case studies.
5. Conclusion.
**Problem:** Actually, the research problem that I'm undertaking is a little complicated and spans multiple areas; it's about designing data warehouses using ontology. So the approach I have followed is bottom-up, providing a chapter of foundations before the solution chapter (point 3 above). These foundations contain multiple basic definitions, constraints and rules that I build upon in the next chapter (including some novel theoretical issues), in order to give the reader sufficient tools to approach my work.
However, my supervisor told me that this is not the best way, and that the reader may be confused by so much theoretical knowledge presented bottom-up, which may lead to them getting lost. Instead, he suggested defining each concept only when I need to use it, even if the concept is one I propose myself.
**Question:** Though I know that there may be no single answer: in such a situation, which is better, bottom-up or top-down? Any suggestions?<issue_comment>username_1: I don't have enough reputation to comment, so we will have to go with an answer. I disagree with <NAME> and I agree with your supervisor. Your thesis is not a textbook, and front-loading in a thesis will not have the intended effect. Frankly, if you start with basically a set of definitions, it is quite likely that your assessors will only cursorily read that section. They might then be confused when they read the actual solution chapter because they don't have the necessary understanding of the concepts you thought they would have.
I think you should follow your advisor. Think of your thesis as telling a story where you introduce new concepts at the point where you need them. You can always have a glossary at the end of your thesis to gather all relevant terminology in one place.
The exception to this advice would be if you are writing in an area where there is a lot of confusion around terminology and names of concepts. If your goal is to argue that prior literature was sloppy in naming its concepts and you want to clarify the literature, a discussion of definitions at the start could be warranted.
Upvotes: 3 <issue_comment>username_2: These are just two different styles, and no one can say one is better for all things, though one may tend to work better with certain manuscripts with certain audiences. In this case, since your advisor has specifically suggested top-down as being better, try it his way. Trust your advisor.
It's not clear from the question why you think the top-down approach is not as good. If you're worried that there are too many new definitions etc. that people will have trouble remembering or locating, this can be solved with an index or index of notation and/or by creating subsections at the appropriate places to treat new concepts in a mini-isolated environment where they're introduced. For readers who already know these concepts, they can just skip those subsections.
Upvotes: 1 <issue_comment>username_3: It's very common to see
1. Introduction.
2. Literature survey.
3. Contribution 1
1. Proposed solution, method, framework...
2. Implementation and case studies.
4. Contribution 2
1. Proposed solution, method, framework...
2. Implementation and case studies.
5. Contribution 3
1. Proposed solution, method, framework...
2. Implementation and case studies.
6. Conclusion
My own dissertation was originally structured like you have in the question, with all results coming after all framework. This got very negative feedback at my oral defense, and I reorganized it as shown prior to signoff and deposit.
The negative feedback wasn't specifically directed at the structure, but at the fact that I had failed to coherently link everything together, and the chosen structure was a major cause.
In contrast, my PowerPoint deck was organized around the three contributions and was received very positively, which helped me reach the decision to use the same approach for the dissertation.
Upvotes: 1 <issue_comment>username_4: Ultimately, I think that you should follow your supervisor's advice, unless he/she offers you enough *freedom* in that regard. However, I would like to emphasize the following **aspects**:
* Your thesis' **structure** is *very typical* for single-topic scholarly works (the rarer alternative, used more frequently in Ph.D. dissertations, is a collection of essays on a common theme) and has a solid *research methodology* foundation, so I wouldn't worry about that at all, unless your supervisor requires you to make structural modifications (in that case, I would politely present my arguments, but comply if compliance is demanded).
* **Terminology**-wise, I think that the above-mentioned standard approach to *structuring* academic work should not be called *bottom-up* - if you insist on using such terminology, I would rather refer to it as **top-down**, since you analyze the topic from more *general concepts* to more *detailed* ones (discussion of the rationale for *choosing* between those two approaches is beyond the scope of this answer, but, briefly speaking, I think that it mostly depends on the availability of some *theoretical foundation* on the topic: if it exists, then I'd use the top-down approach, otherwise - bottom-up).
Upvotes: 1 <issue_comment>username_5: I'd just like to provide a bit of personal perspective. Last week I defended my Masters thesis in Mechanical Engineering. I wrote it in the style you call "bottom up."
My thesis was an experimental comparison of different published literature methods for testing thermoelectrics, so I started with a chapter on thermoelectrics, followed it with a chapter on the testing methods, and then moved to the more traditional goals, methods, results, conclusions for the remaining chapters.
By putting the requisite technical background at the front, I was able to make my discussion of the methods in the experimental section more succinct and keep discussion of the basic mechanisms mostly out of my conclusions, but I did repeat key concepts *throughout* the thesis. I think that it is possible to write in a "bottom-up" manner, but remember that you may still have to briefly summarize concepts as they come up in the work.
Upvotes: 0
|
2015/05/07
| 662
| 2,702
|
<issue_start>username_0: I would like to pursue summer college courses - particularly a creative writing class and a computer science class - to help broaden my knowledge and improve upon myself as a person.
However, looking at some college courses in the local city, tuition alone for a single undergrad course is [upward of $800 for a three-credit Summer class](http://www.albany.edu/studentaccounts/ANTICIPATED_(3)__Summer_2015_Per_Credit_UNCAPPED.pdf).
This seems a little excessive. Is this a typical cost for college summer courses in most areas? And is there a cheaper alternative if I'm seeking to expand my knowledge pool, and not necessarily seeking academic credit hours?<issue_comment>username_1: The cost of summer courses in the United States is likely to be as staggeringly variable as the cost of courses during the semester. To the best of my knowledge, most universities generally do not change their tuition fees significantly during the summer.
As such, it will range across at least two orders of magnitude depending on the school you are dealing with, and can easily change by nearly an order of magnitude depending on the type of student that you are. Consider, for example, [this table of tuition rates published by the University of Iowa](https://www.maui.uiowa.edu/maui/pub/tuition/rates.page): you will find that summer course charges are almost identical to semester charges, and differ wildly for Iowa residents and students coming from out of state.
Upvotes: 3 <issue_comment>username_2: >
> to help broaden my knowledge and improve upon myself as a person
>
>
>
If you don't need the credits toward a degree, then the solution is to audit the classes. What might trip you up is that summer programs often do not allow auditing, whereas the fall and spring semesters do.
The audit fee is often on the order of 10% of the normal charge.
However, even that has a potential solution. Send an email to the instructor, asking him/her to give you a call please. (What you're going to say needs to be said in person or on the phone, not in an email.)
Explain that you would like to audit his class, but since auditing isn't permitted at this institution during the summer session, you'd like permission to do an informal audit. If the instructor is a reasonable person, the answer will be yes.
---
On the other hand, if you need the credits, take a look at <http://www.collegecalc.org/> to compare tuition rates.
---
Please note, you can get a very good creative writing course at some community colleges. Sometimes that's the best place to get individualized attention. Don't forget to check ratemyprofessor.com before you register for classes.
Upvotes: -1
|
2015/05/07
| 819
| 3,464
|
<issue_start>username_0: I have two offers for a Visiting Assistant Professor position. Can I negotiate for a higher salary for a VAP position?<issue_comment>username_1: You can certainly try to politely negotiate salary, but I wouldn't count on getting much if anything at all.
At my institution, my experience has been that the administration simply won't negotiate on salary. For a VAP, the department is simply looking for someone who can handle teaching load for a year or two. There are typically many qualified applicants with enough teaching experience, and because the position is temporary there just isn't any point in paying more than necessary.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Brian is right that you're unlikely to make much headway. I think maybe he isn't recognizing the variety of positions that go under VAP (some of which have serious research expectations), but still, I expect they will have relatively set salaries; it could be a lot of work and bureaucracy for the chair to request the change even if s/he wants to. It's often a bit easier for chairs to sneak other things into offers. You might be more likely to be successful if you asked for, say, 1k in research funds rather than salary, since often those funds are more flexible.
Upvotes: 4 <issue_comment>username_3: Ben has precisely identified the issue: funds for these positions are difficult to change. If you politely ask about research or accommodation funds, however, you may find they are able to allocate from the department budget or various grants, as well as pre-purchase items such as laptops (within reason).
Upvotes: 2 <issue_comment>username_4: It may depend, to some extent, on how much the university wants you, and on how much flexibility they have. If the school doesn't often hire visiting professors, there is more chance they will treat your hire like any other hire (although not tenure track) with some room for negotiation. If they hire numerous VAPs each year, they are more likely to think of it as a more routine kind of hiring with fixed parameters.
Here is a personal anecdote. I had a one year VAP at one point. That year, I also had the option of remaining at my PhD school as an instructor, which would have been less ideal but which would have certainly been cheaper. I managed to convince/negotiate with the other school to raise the salary slightly to compensate (not more than a couple thousand, plus some moving expenses). But, I believe they really wanted me to come.
One thing to keep in mind is the [BATNA](https://en.wikipedia.org/wiki/Best_alternative_to_a_negotiated_agreement) on both sides. For example, do you have alternate positions available? Does the school likely have equally qualified candidates to choose from if you don't accept the offer? There was a [news story last year](https://www.insidehighered.com/news/2014/03/13/lost-faculty-job-offer-raises-questions-about-negotiation-strategy) about a tenure-track job offer that was derailed by negotiations that the university thought were too aggressive, so it may help to inquire informally about how much flexibility there is on the salary.
Upvotes: 2 <issue_comment>username_5: To add to the other answers, if this position is covered by a union contract, that contract will specify if the salary is negotiable. Typically these contracts discourage individual salary negotiation. Often the contract is available from the institution or union website.
Upvotes: 2
|
2015/05/07
| 483
| 2,071
|
<issue_start>username_0: From September I will be starting my final year project at a UK university.
I will be developing some software. However, the software I wish to create has multiple "modules" to it: approximately 16 modules for 16 different functions.
A fair few of these modules have been created before by various people and the code is completely open source with no licensing restrictions.
I will be writing a lot of the code myself, but I do not see the point in "re-inventing the wheel" by writing some code which will do the exact same thing as some software already out there.
Is it okay to re-use their code in my project? To what level is it okay to do this? 2/16 modules? 6/16? I do not want to go over the top into the plagiarism world. Obviously all work would be credited etc.<issue_comment>username_1: You absolutely must not use the work of others without attribution. If you credit everything you use in an adequate way, there is absolutely no limit on how much stuff from other people you can use. However, for your project to be successful, you will need to have a sufficient amount of your own contributions. To see what counts as sufficient, look at past projects and ask your supervisor.
Upvotes: 3 <issue_comment>username_2: Writing your own software for the sake of not using something that is out there is disingenuous - unless your goal is to learn how to create, say, a quick sort routine from scratch, you should absolutely "stand on the shoulders of giants". By using whatever exists, you will be able to "see further" - the intellectual effort and contribution will come from "boldly going where no man has gone before".
So
* full attribution of open source material used
* emphasis on something new
* work even harder than if you rolled your own code from scratch: since you had a head start you will be expected to reach a higher level
And of course check with your course director / advisor / supervisor. The above makes sense to me, but it may depend on *what the educational goal of the project actually is*.
Upvotes: 1
|
2015/05/08
| 305
| 1,304
|
<issue_start>username_0: I'm currently in the first year of an MS by Research in computer science. The first year consists of coursework and the second year is the research thesis.
I've been working part time 20 hours a week in a software company. I have 4 subjects. So far I have managed to complete assignments but haven't really studied well.
Now as the semester is close to end, I have projects, assignments and exams. I'm struggling to decide if I should continue working here. Sometimes I think it's doable but it's a little risky. I may not get proper marks to progress to next semester. What would you do in such situation?<issue_comment>username_1: It is most certainly possible to get a research degree while working part-time --- or even full-time. I know a number of people who have done it. As you are finding, it is quite rigorous, however. In many cases, it may be better to take the *coursework* part time, so that you have enough space to really absorb the material, rather than just trying to hit passing marks.
Upvotes: 2 <issue_comment>username_2: Another option might be to focus on your studies full time, without working. Do you have savings that would permit this? Can you get a teaching assistantship? What about a student loan? Perhaps your employer would give you a leave of absence.
Upvotes: 0
|
2015/05/08
| 558
| 2,391
|
<issue_start>username_0: I am curious to know what actions are generally thought to be acceptable when one sends a request for a paper review, especially a journal one.
When academics are asked to review a manuscript and they have the relevant expertise, is it common for them to decline the review without giving any reason?<issue_comment>username_1: Speaking as an editor: generally if somebody declines they do *not* give a reason.
The biggest reason for this is that the automated "please say yes or no" forms generally don't force people to explain why they are saying no. So they don't say.
I think this is a good thing. If somebody doesn't feel they are willing/able to put in the effort to give a thorough and professional review, I would much rather they be able to simply and easily say "no" rather than either being pressured into saying yes or having to come up with excuses.
The people who I really **love** though, are the ones who point me at somebody else who wants to do the review. I don't care why they aren't reviewing; I do care about finding competent and professional peer reviewers.
Upvotes: 4 <issue_comment>username_2: Established scientists receive a large number of review requests. A few years back I decided to tally the review requests I received, and the number I accepted. In the end I received 112 and accepted 28 over the course of a year. Add to that the editorial work I did for three journals and you get an awful lot of uncompensated, unrewarded time invested for the good of the field. (I typically spend about eight hours a week on reviews and editorial work.) So yes, I declined a large number of review requests every year, many of which I would have been well qualified to handle. When the editorial system allows, I try to at least leave a note explaining that I'm too busy, but as username_1 notes, this is not always an option.
So yes, qualified reviewers can and do turn down many review requests, often without providing an explanation. As an editor, I know that if I ask well known researchers this will happen as often as not, and I don't mind that one bit. What I do dislike is when people don't bother to click the "decline" link on the review request, and force me to wait for a week before giving up and trying to find a new reviewer. This slows down the review process and ultimately slows down science for everyone.
Upvotes: 5
|
2015/05/08
| 911
| 3,912
|
<issue_start>username_0: After a couple of years in a decent grad school - passing quals, thinking many times about quitting (the usual: impostor syndrome, stress, etc.), and publishing one paper - I feel like I have a clear head to make this decision. I do not want to continue with a career path in academia; I want to be an engineer (or work in another technical field). In short, I will be happier that way, and I need to support my family with a better income.
I love physics and it will always be a part of my life and would love to still work with it. Currently, I work in experimental condensed matter: know a lot of fabrication techniques, studied lots of mechanics, can build circuits and perform fairly complicated electronic measurements, written programs for analysis (mostly Matlab, but some C in the past). The list can go on... I feel like the transition with this track record shouldn't be hard - but I don't want to be misinformed or get too "cocky".
**What can I do to be a competitive applicant who will have to contend with people who have engineering degrees and training?**<issue_comment>username_1: First of all, physics is part of engineering. Not in as much depth, but if you choose electronics or electrical engineering, you will still be doing physics.
Just rethink why you actually want to do engineering.
Is it because you want a better income for your family's future, or because it genuinely interests you? I would say just follow your interests.
It would not be right to change your career just for a good income, as there are also lots of unemployed engineering grads.
Just follow your dreams and be happy.
Upvotes: 0 <issue_comment>username_2: You probably already know that, but, supporting @scaaahu's advice, I want to emphasize that you should have an industry-focused *resume* versus an academia-focused *CV*. The latter should focus on your **skills**, but since your future engineering work will likely be in industry R&D or similar domain, make sure that relevant research and education details are mentioned in your resume as well.
Speaking about a potential strategy for transitioning from academia to industry, one IMHO easiest approach would be to try to find positions, relevant to your skills, experience and goals, in high-level research facilities. For example, for condensed matter physics, you could take a look at (here I assume that you reside in the USA) corresponding departments at [Brookhaven National Laboratory](http://www.bnl.gov/cmpmsd) and [many other US government labs](http://www.loc.gov/rr/scitech/selected-internet/physics.html#phy_lab) as well as [Harvard](http://weitzlab.seas.harvard.edu) and many other research universities, hosting experimental physics labs. While many of positions there might require specific training and degrees, I'm sure that some don't (probably, highly dependent on area and institution), so your expertise and skills would be enough to secure employment. Best of luck!
Upvotes: 1 <issue_comment>username_3: There are actually a few areas where you might stand out more than someone with a straight up engineering background.
1. Look at specific types of organizations like Space research and exploration organizations. These typically have other PhDs in Physics in their program and you might have a better shot. A collection of such roles is here:
<http://tapwage.com/channel/space-doctor>
2. Another area is interesting startups. 3D-printing-focused startups, for example, are looking for bright scientists. Given your experience with mechanics, electronics and tools like Matlab, that could be an interesting area, and startups might be more amenable to looking beyond very structured educational experiences as they look for good talent.
<http://tapwage.com/channel/engineer-in-3-dimensions>
Ping me via the tapwage.com site if you have further questions or clarifications and I'd be happy to answer.
Upvotes: 2
|
2015/05/08
| 2,329
| 9,338
|
<issue_start>username_0: I'm really not sure how to proceed from here.
I'm an undergraduate Computer Science student, planning to earn my MSc in the next couple of years.
A couple of weeks ago, I spoke with a professor (I'll call him A) in an informal setting, and, since he was my teacher in an area relevant to the subject, I discussed with him an idea I had and how I was trying to make it my own thesis project.
I have also previously "pitched" this idea to another professor (B). He liked it, and we agreed that once I graduated, he was willing to be my advisor, and I could work on it.
However, I was recently shown the list of thesis proposals for current students, and my idea was among them - being supervised by prof. A, and an unrelated professor, C.
The details are so close to what we have discussed that this is almost surely no coincidence.
Now, I can't say I'm 100% sure he stole my idea, but he didn't mention any of this when we talked, and this thesis list was made *after* our conversation.
Anything could have happened: either I'm right, or prof. C actually came up with the same idea, or prof. A had a similar idea in the past but didn't tell me about it.
I don't know how to proceed. I am already skeptical of academia - seeing my peers work on projects and write papers that didn't interest them, for the sole purpose of earning an MSc, means I'm not interested in working for it unless it's a subject I really care about. That is - I either find something I *want* to work on, or I don't care about earning an MSc at all. This was the case. I found a project I wanted to work on.
My question is on the ethics of what may have happened. I have zero experience in academia or research environments, but I have always considered ideas as important as research. That is, copying an idea is as serious to me as copying research. This would fall into plagiarism.
* Is this perspective shared among actual academics? Am I naive in thinking like this?
* Am I right in wanting to "keep" an idea to myself, in order to work on it later?
* Should I just stay quiet the next time? Shouldn't I be able to discuss these kind of things with people I consider to be honest or "bona fide", without having to be afraid of being copied?
I see multiple possible courses of action, but in the end, I don't know what I want to achieve. The thesis is already assigned, so I probably won't be able to take it away from the student who got it. I also don't know whether this is an ethics violation from prof. A or not, since this is "just an idea" that I voluntarily shared with him.
I can confront prof. A about this, either sending him an e-mail or speaking with him in person, to get his side of the story.
Or I can go to prof. B and tell him about this situation, to see what his thoughts on it are. He looked interested in working with me on this, so maybe he'll know better how to proceed.
What is the best (or "a good") course of action now? I don't feel I should forget this whole story and move on to a different subject, but I don't see what can be done now.<issue_comment>username_1: An idea is an important part of research, but copying an (unpublished) idea is not entirely plagiarism, I fear. Academia is dealing with a double-edged sword here.
On the one hand, it is about plagiarism and authorship and credit; on the other hand, really good ideas have to be preserved, even in case you lose interest in the matter (or drop out, or something like that).
I guess Professor A assumed that, since you are years away from your degree, and since master's theses, unlike PhDs, are not strictly required to dig into a completely new topic, academia can cope with multiple master's theses on your topic, and decided that he cannot wait to push such an interesting topic until after you publish your initial work, should that ever be the case. I don't know about your university, but at mine, a third-semester C.S. bachelor has a less than 50% chance of actually getting a master's degree. Most dropouts here are "caused" by mathematics and/or engineering basic courses, not by CS classes, and they also befall CS students who score above average in the CS classes.
Now, if you really want to pursue that issue, there are three leverages you have. Let's call them A, B and C:
* Depending on your current situation, you can tell A that the "idea" of that master's thesis is something you will start working on very soon, and ask him to take the topic off his list for a certain amount of time. I personally do not believe that he will take it off his list for as long as three or four years; this is a really long time in CS.
* You may also talk to your to-be supervisor B and tell him that you presented your ideas on that matter to A on a particular date, and that this very idea was published only afterwards. If both professors are at the same institute, he should be able to sort it out (but I don't know if he will do it, for the aforementioned reason).
* And you can send an e-mail to C, informing him about the very same fact and asking him whether he can definitely say that A's idea is older, and if not, whether he is comfortable with being part of a situation of stolen ideas. This may only have the effect that C does not do co-supervision.
I personally would recommend against any of these actions; they burn bridges without promising any positive effect. But you should consider it a valuable lesson learned for your PhD:
**Mouth shut, ears open!**
Upvotes: -1 <issue_comment>username_2: I think it's very clear that you should talk with (B). You had previously discussed this idea with (B) so s/he knows that it was something that you had been considering. Furthermore, (B) also presumably knows the situation within the department better than you. Maybe (A) is known for doing things like this. Maybe the overlap isn't as serious as you think and (B) can reassure you that you can still do your thesis as you wanted to. No one here can decide whether either one of these is the case.
One word of advice: in talking with (B), present this situation as something that you are concerned about, but stay unemotional and stick to the facts: (B) knows that you were interested in this similar project, you discussed it with (A), and now another student is doing this project with (A). You do not need to connect the dots and accuse (A) directly of plagiarism or dishonest conduct in an initial conversation with (B). If this is a misunderstanding or miscommunication then doing so makes you seem aggressive and prone to jumping to conclusions. If (A) did something that is as blatantly dishonest as you have suggested, then that will be obvious to (B) as well.
Upvotes: 5 <issue_comment>username_3: Anybody can steal an idea. Theres not much worth of an idea in itself. The main thing is the execution. That nobody can steal. IF you believe so much in your idea. Why don't you make it a reality.
Before Google there were hundreds of search engines. So an idea in itself is worth a dime; it's the real, tangible form of it which matters. If you like it so much, go for it before anyone else does.
Upvotes: -1 <issue_comment>username_4: I agree that you should talk to B about it.
In the meanwhile, let me offer another perspective. Given that you are an undergrad, it is possible that you are not experienced enough to judge:
1) what the research frontiers are in the field
2) how "close" two ideas are
For example, it may be the case that your idea was something general like "I would like to use technique X on problem Y". But the novelty may come from the fact that technique X needs to be modified drastically on problem Y (e.g. if Y is a super large dataset). In this case, ostensibly the ideas are similar, but in fact the core of the idea, at least from a "contribution to science" perspective, is different.
Upvotes: 3 <issue_comment>username_5: This is something everyone in academia is concerned about. I agree that you should discuss it with professor B, as suggested in the positively-voted answers above. In addition, you can then carry out some quick research pertaining to your algorithm and produce something publishable, possibly collaborating with professor B. A conference paper is a good target.
Upvotes: 0 <issue_comment>username_6: I suspect it will still be possible to ask the student (call him (D)) to change projects. Changing research areas happens all the time, and while it won't be ideal for (D), surely (A) or (C) will be able to find another project for him, or (D) can change projects. It's more unfair to you to have your idea stolen than it is to (D) to change topics.
I would probably have a word with your department's (or your smallest organisational unit's) head of postgraduate research about this. They can probably have a word with (A) and ask them to change (D)'s topic, hopefully without disciplinary action. This will be much easier if you have a written record of having discussed the idea with (B), as well as any detailed, date-stamped notes you took on it. As everyone else has said, I would talk to (B) before this to get his perspective, and ask (B) if he has any written records of discussing your idea. But I would have a plan before you talk to (B), in case (B) doesn't want to push your case himself to protect his relationship with (A).
Upvotes: 1
|
2015/05/08
| 738
| 3,214
|
<issue_start>username_0: I'm a first-year master's student. Currently I'm doing coursework, but I'll have research in the second year. I find all subjects interesting. I feel better when I'm doing subjects that include a bit of mathematics and rigorous thinking - Algorithms in particular.
But I'm not really good at it. I try hard but it's just not happening. Still, if I'm given the opportunity, I want to do research in Algorithms.
This will obviously require mathematical thinking.
Is it good to pursue a research career in a subject which I'm interested in, ignoring the fact that I do not have a strong base and understanding of it? Or should I choose something easier based on my abilities?<issue_comment>username_1: Doing research on a subject is the best way to get a good understanding of something. Presumably, if you do reasonably well in the parts of your coursework covering algorithms (in the eyes of the professors), you're doing well enough. Mathematics and rigour are hard, even for most people who are capable of doing good work in the area. Also bear in mind, humans are subject to the [impostor syndrome](http://en.wikipedia.org/wiki/Impostor_syndrome). If you find a professor who knows you and is willing to guide you in research in this area, then I don't see a problem.
Upvotes: 3 <issue_comment>username_2: This is something that happens a lot to students. I have seen some students coming from all over the place into the Computer Science department, which is a little heavy in terms of the Algorithms skills that is required. A great strategy to go about knowing if you will be okay is through Advisers. It is important that they know the skills you possess. They will normally have a plan for you.
Upvotes: 0 <issue_comment>username_3: The question, as you formulated it, is practically impossible to answer with enough *certainty* to be useful. The difficulty lies in the fact that the problem has many specific **factors**, such as your personal and professional goals, family situation, mental toughness (perseverance) and others, and only you are aware of them. Thus, only you can figure this out or, rather, try. Therefore, I will address how I believe you could think about the problem, which is essentially common sense.
I think that, basically, you have two options, which, per your question, you seem to understand: 1) **easier route**, where you choose your research subject and/or topic, based on some arbitrary criteria (for example, based on research interests of a faculty member, which you would like to work with), but *avoiding the tough ones*; 2) **more difficult route**, where you choose your research subject and/or topic, based on essential criteria, such as your interests, *even if the chosen subject/topic is challenging* for you at the moment. In the latter case, you would make **commitment** to master the subject, despite obstacles, in other words, *pushing yourself beyond your current limits*. Based on such *logical framework*, the choice is always yours, and remember that you always have the *freedom to re-evaluate* your decisions at any moment, should changed circumstances, interests or needs *warrant* that. Best wishes for whatever route you decide on.
Upvotes: 1
|
2015/05/08
| 630
| 2,578
|
<issue_start>username_0: For Ph.D. theses, I think nobody would disagree with adding acknowledgements. But what is the stance in different places for a bachelor’s and/or master’s thesis?
When I was doing both, I strictly followed the guidelines laid out by the university which required me to include an abstract in both English and German, introduction, results, discussion, experimental details, a table of contents, a statement that it’s my own work and an appendix (which included literature, copies of the NMR spectra and Perl scripts) — acknowledgements weren’t mentioned anywhere. (The Ph.D. student whom I was working with during the bachelor’s thesis and the subgroup leader where I did my master’s were included in the statement that it’s my own work.)
Now, during my Ph.D. work at a different university, a lab colleague is finally completing his master's thesis and has several sample theses from former master's students. Most of these included acknowledgements (even though they often didn't say much).
I found one site with a non-representative poll which was in favour of not including them, the reason being that the duration of the work is just too short; but that's merely anecdotal, not representative.
So outside of universities where either version is required, should a master’s or a bachelor’s thesis include acknowledgements? Is there any kind of general practice?<issue_comment>username_1: Acknowledgments are almost always an optional part of a document that you can choose to include but don't have to. This is true for PhD theses as well as MSc, Diploma theses, or any other kind of document.
The guiding line is: If there are people or institutions who you feel you want to thank or acknowledge their support, then do so. If you think that everyone in your life and your university has let you down, then don't.
Upvotes: 5 [selected_answer]<issue_comment>username_2: *Bachelor thesis*: Nobody will think a second about a missing acknowledgement. If you want to add one: Make it damn short. Your direct advisor, parents or partner, one additional person. No professor, no other coworkers, unless they really went over the top in helping with your work. You don't want to look like an ass-kisser to next year's students. ;-)
*Master*: Not absolutely necessary, but as you worked together with people for half a year or more, you likely have reasons to thank them, and should. Keep it under one page.
*PhD*: If you feel you have to go over one page, make it entertaining. It's the first thing people check in a thesis, so don't be a bore.
Upvotes: 2
|
2015/05/08
| 772
| 3,325
|
<issue_start>username_0: Some papers I wrote recently as an independent researcher, several years after receiving my Bachelor's degree have been accepted at good CS conferences (rank B in the [CORE conference ranking](http://www.core.edu.au/index.php/conference-rankings)).
I'm considering getting a PhD. Is it realistic to expect that these papers can be used to partially fulfill my PhD requirements?<issue_comment>username_1: >
> Is it realistic to expect that these papers can be used to partially fulfill PhD requirements?
>
>
>
Probably not, if you're in the U.S. There's a concept of a "Ph.D. by research", which is awarded on the basis of past research accomplishments. This only barely exists in the U.S., although it's more common in some other countries.
Instead, the way most U.S. Ph.D. programs work is that the dissertation must be based on research conducted as part of the program. One reason I've heard is that overseeing research in person makes it easier to judge how well or honestly you are carrying it out or how much assistance you might be getting. In any case, being unable to use your previous papers in your dissertation is generally not a real obstacle to graduating quickly, if that's your goal. Much of the time in a Ph.D. program is spent developing the ability to do high-level research, and if you already possess that ability upon entering you could finish substantially more quickly than usual. (I'm not convinced that finishing quickly would be a good career move: it's almost always much better to focus on depth rather than speed. However, you shouldn't worry that putting your prior work off-limits for your dissertation will be a burden.)
Upvotes: 3 [selected_answer]<issue_comment>username_2: Although I'm a social scientist, this would not be acceptable in my program. In fact, I was not allowed to use a paper I had published during the program directly in my dissertation.
Of course, you can very likely make use of elements of your previous research (e.g., the literature review and perhaps even data) to help reduce the amount of work required to complete your PhD dissertation. At the end of the day, you are going to have to ask your dissertation committee chair and/or department chair what the rules of your program are -- but at most schools in the US, the department wants to see evidence of your ability to work as an independent scholar before they let you graduate.
Upvotes: 1 <issue_comment>username_3: Most academic work is only judged on what you produce during the period of study, so in that sense it is unlikely that an institution would accept previously published papers to tick certain boxes.
It's worth clearing up this issue of publication underlying your question. A PhD is awarded for performing and documenting research of publishable quality during your candidature. It's up to your external examiner to decide whether your work meets that standard, but it helps them make their decision if it has been published (as it means some others have peer-reviewed it).
If your previous work is a solid foundation for your new work, then it may count in your favour, but it would be even better to publish your new work. As you have experience of producing publishable quality research, this goal should be within your grasp :-) Best of luck!
Upvotes: 1
|
2015/05/08
| 1,097
| 4,199
|
<issue_start>username_0: Trying to figure out the difference between these. I'm completely lost, as different sources/answers interpret it differently. Does anyone have a clear understanding of what undergraduate/graduate/post-graduate student means in the USA?
I've seen this nice timeline in [another question](https://academia.stackexchange.com/questions/7469/undergraduate-graduate-or-post-graduate-student-is-that-bachelor-master-phd-o); however, the answers are still controversial.
As well, I'm trying to figure out who the hell I am - I have a Bachelor in Computer Science, a Master in Computer Science and another Master in Business Information Systems. Is that graduate or post-graduate? Also, I'm not a student anymore.<issue_comment>username_1: Part of the confusion may be that these adjectives are used in (at least) two different contexts: to describe *degrees* and to describe *students*.
* An *undergraduate degree* generally means a bachelor's degree (B.S., B.A., etc): a degree requiring about four years of university-level study beyond high school.
* A *graduate degree* or *post-graduate degree* is any higher degree that has a bachelor's degree as a prerequisite, such as a master's or doctoral degree (M.S., M.A., M.F.A., M.B.A., Ph.D., etc.) Depending on context, this term may also include professional degrees (J.D. for law, M.D. for medicine, D.D.S. for dentistry, D.V.M. for veterinary medicine, etc).
* An *undergraduate student* (or simply *an undergraduate*, or colloquially, *an undergrad*) is a student who does not yet have an undergraduate degree, but is studying to earn one.
* A *graduate student* or *post-graduate student* (or colloquially, a *grad student*) is a student who already has an undergraduate degree and is studying to earn a graduate degree.
So in general, the adjectives *graduate* and *post-graduate* are synonyms. This may seem contradictory (since you might expect *post-graduate* to refer to something after *graduate*) but that is how it's used. I understand this as coming from the fact that a *post-graduate degree* is something that you work toward following your *graduation* (i.e. the moment when you earn your *undergraduate degree*).
My impression is that people within academia generally prefer the term *graduate* to *post-graduate* in both contexts; the word *post-graduate* is used more often by non-academics, to whom the word *graduate* is more likely to seem ambiguous.
So you could say you have an undergraduate degree and two graduate degrees. You could also describe your degrees as post-graduate degrees for further clarification, which would normally be needed only when speaking to someone not intimately familiar with academia.
Upvotes: 6 [selected_answer]<issue_comment>username_2: "Post-graduate" is simply not a commonly used term in the US. I wouldn't expect Americans to use it consistently. It's not commonly used to describe a person who has graduated with any type of degree; I've found references to "post-graduation" on US/Canadian websites (just referring to the time after someone has graduated). In Britain and much of the Commonwealth, "postgraduate" (with no hyphen) refers to studying for a master's or doctoral degree; in the US, we would use "graduate" instead. So, Americans would say you have a "graduate degree," Britons would say you have a "postgraduate" one.
Upvotes: 3 <issue_comment>username_3: I have to say that there is one difference between *graduate* and *post-graduate* -- while both mean "beyond Bachelor's degree", the first refers to *level* and the second to *time*.
If I were asked for a transcript of my graduate studies, I'd have to list several courses taken before receiving my B.S. because these were graduate classes -- designed for and taught to graduate students -- and I was permitted to add the classes by special permission. But if asked about post-graduate studies, those wouldn't be included, since they did not chronologically follow my graduation.
Upvotes: 1 <issue_comment>username_4: A graduate is seen as someone who has successfully met the requisites for qualification on an undergraduate course and has been awarded their certificate of graduation.
Upvotes: -1
|
2015/05/08
| 1,670
| 7,372
|
<issue_start>username_0: Years ago I had a really rough few years of college: inability to focus, tons of stress, manic episodes, etc. Instead of going into the drama of it all, the bottom line here is that during those years I cheated in a few classes because I felt paralyzed and paranoid almost all the time; by the time I was able to move about without that weight I was into another manic episode, trying to get things done but not finding a way out (at this point I knew something was wrong but not what it was). I was caught, and in the aftermath I submitted a statement that I fully accepted the consequences instead of contesting my obvious guilt.
Later I would find out that I had type 2 bipolar syndrome that had been untreated for the past decade, and I was told that I was lucky it hadn't gotten worse. Once I was on medication I tried college again at a different school and graduated recently with honors.
My question is this: I'm really interested in going to a master's program, but I don't know how to explain my first attempt at college, the failures, and the mental illness, or even whether I should mention it at all. I don't want to hide things, but I'd appreciate any advice on this.<issue_comment>username_1: Your outstanding performance in your second attempt should be your focus. Don't let yourself downplay how much you've achieved this time around, because every candidate without your history will be singing their own praises. However, you should briefly discuss the previous attempt in your personal statement, with as much specificity as you are comfortable providing. A past failure with mitigating circumstances should not be held against you, provided you acknowledge it and your recent performance clearly demonstrates that you've overcome those circumstances.
Upvotes: 4 <issue_comment>username_2: Your whole story sounds like what you would go through during your graduate studies. Don't worry about it much. From my experience during the last few months in a Master's Program, there is a bit of panic in the beginning, but as long as you understand that failure is a part of the whole process, you should be fine. I failed a lot during my first year but still managed to pick myself up and excel during the last months.
Upvotes: -1 <issue_comment>username_3: If your rough patch was so bad that you MUST address it in your application, your best move is to be honest (but brief!) about what happened, demonstrate how you've moved past your mistakes, and focus on your recent achievements.
I speak from personal experience. My second year of undergrad was rough. I was suicidal, diagnosed with a mental illness, abused my medication, failed classes, cheated, the works. I was caught cheating and received essentially the maximum punishment that wasn't suspension: I had to fail the class, and there would be a permanent note on my official transcript that says "This person received a mandatory fail for class XXX due to violation of the academic code." It was a huge wake-up call. I worked my ass off for the next few years, pulled my grades up, and ended up getting into my top choice Master's program (Ivy!)! If you told me that in undergrad I would've laughed because I thought no grad school would accept me. Here's what to do:
**1. Mention it only if you have to.**
Only mention your mistake in your application if it's visible to the admissions committee. In my case I had to because it was in my transcript and NOT explaining the note would be a huge red flag. If whatever you went through isn't obvious from your other application materials, then mentioning it is unnecessary. The admissions committee only skims your application, and something you think is horrible could be something they don't even notice. Can you give me more details about your situation? What exactly happened that was so bad that you're considering explaining it in your application?
**2. If you have to mention it, be honest but brief.**
Don't dwell on your mistakes; a few sentences of explanation are sufficient. You want the committee to focus on your achievements and not your failures, so just mention the negatives as straightforwardly, concisely, and professionally as possible and move on. Don't give more information than you have to and don't get too personal. In your case I would not mention the mental illness unless it's absolutely relevant. I didn't even mention my bad grades; I just explained the cheating. I didn't want the admissions committee to remember TWO negative things about my application.
**3. If you violated a moral or legal code, show remorse and show that you've learned from your mistakes. Demonstrate how you've moved past your old mistakes onto future successes.**
The point of disciplinary action is to teach you a lesson. Show that you have learned from your errors. End on a positive note by mentioning how you've overcome your mistake to achieve your recent successes. Show that you've moved past your mistake and are ready to work hard in your dream school's grad program.
**4. Submit the explanations document separately from your personal statement if possible.**
Your personal statement should be overwhelmingly positive and confident in tone, and anything negative in it will be jarring to the reader. If there is a separate section in the application for you to submit this explanation document, then do that. If they don't offer a separate section, contact the school and ask.
**5. If there's someone with clout who can write you a recommendation letter, ask them to explain it for you.**
This person should be pretty influential, though -- it should be someone the admissions committee can trust. In my case I got a professor who was on the admissions committee of the school I was applying to to write me a recommendation letter. I had taken a class with him as a visiting student, and when I asked him for advice about explaining my academic violation in my application, he offered to explain it in his letter. This is something I would do only if your letter writer is someone the admissions committee trusts more than they trust you.
Admissions committees are people and they're willing to forgive you. They were students once too.
Congratulations on your recent successes! Overcoming mental illness to graduate with honors is an amazing achievement that you should be very proud of.
I hope I've helped. Feel free to contact me privately to discuss your situation in more detail, if you'd like. I spent a lot of time and met with a lot of professors to figure out how to address this in my application. I'd be happy to share what I've learned.
I should mention that I'm in the US and studying computer science. My academic violation was in an art history class unrelated to my major.
Upvotes: 4 <issue_comment>username_4: Your story seems very simple and straightforward:
1. You had an undiagnosed and serious medical problem.
2. After diagnosis and treatment, the problem is fixed.
Your grades are the clear evidence for point 2, and in the unlikely event that the admissions process asks for proof of your medical diagnosis, that should be easy to provide.
There are lots of people who are taking life-long medication for a wide range of conditions, and that doesn't prevent them having successful lives and careers. Don't get stressed out just because you are another one of them.
Upvotes: 3
|
2015/05/09
| 801
| 3,430
|
<issue_start>username_0: I submitted a paper to a conference, and got it accepted. While I was preparing the camera-ready version of it, I made a mistake and an error slipped in.
After the firm deadline, I asked if a correction was possible, but they said it's too late since it had gone through all the procedures. So, it seems my conference paper will be uploaded online with an error, which is not too obvious, but obvious enough that a careful reader can spot it.
I'm writing a journal version of it, and making sure it doesn't contain any error. Would it be okay to have the journal version published error-free, while the conference version has an error?
Many say that no one actually carefully reads conference papers as they are mainly aimed to let others know new results, and say that it is journal papers that others read with more care if available.
This is my first published conference paper, and it is really embarrassing. I was obsessed with finding typos, and blind to technical errors.
Thanks.<issue_comment>username_1: While there will certainly be a mistake in the version published with the proceedings (i.e., on the USB stick or other media given out at the conference), it is likely that it can still be corrected in the final archival version. If your conference is associated with a professional society that maintains archival versions (e.g., IEEE, ACM, AAAI), check and see if the same mechanisms for handling errata on a journal article post-publication can be applied to errata on a conference article post-publication. I had much the same thing happen to me with one of my early conference articles as well; it was in an ACM conference, and though dealing with the errata was a pain, I got it through and the version you download today should be correct.
---
Correction: per the OP's comment apparently [the IEEE won't do it](https://supportcenter.ieee.org/app/answers/detail/a_id/172/~/i-found-some-errors-in-a-paper-in-ieee-xplore.-how-can-these-errors-be), which seems problematic and is news to me. In that case, the best thing to do is probably to just
1. Make sure it's correct in the journal version (and include a footnote in that paper that explicitly notes the error in the prior conference publication), and
2. Post a note alongside your self-archived pre-print giving the errata as well.
Upvotes: 2 <issue_comment>username_2: The reality is that the best answer is simply "get used to it". It is sad that this happened to you in your first conference paper, but things do get published with mistakes, and there is really not much anyone can do about it: you can proofread papers as often as you want, and there will always be mistakes. I'm not saying this because I am a nihilist but simply because I'm pragmatic: of course I want papers to be perfect; it just isn't practical.
The interesting question is only: how severe is the error? You don't answer this in your question above, but if it is really only a typo or misspelling, then it would not be worth losing sleep over. If it is a mistake that a reasonably educated reader will be able to recognize as a mistake, the same would probably apply. If you submitted a proof that has a mistake and upon further thought you realize that the whole theorem is not true, that would be a separate matter -- but you wouldn't introduce such an error while dealing with the final version of the paper.
Upvotes: 4 [selected_answer]
|
2015/05/09
| 459
| 1,830
|
<issue_start>username_0: I am from India, and we have two different degrees: one is honors and the other is pass. I graduated with a BSc (pass) degree. This degree included studying Physics, Mathematics and Computer Sciences in breadth. An honors degree would have required me to complete a depth requirement in only one subject.
Now, as I look into eligibility for a master's degree, I am finding it difficult to find the equivalent of my degree in the UK.<issue_comment>username_1: I think this is going to vary from university to university and course to course. A good starting place might be the [international qualifications required for Cambridge](http://www.graduate.study.cam.ac.uk/international-students/international-qualifications) and [the University of Warwick](http://www2.warwick.ac.uk/study/international/admissions/entry-requirements/#i).
Upvotes: 0 <issue_comment>username_2: All UK universities use a Government body to verify and calibrate overseas qualifications for the purposes of admissions and employment etc.
Their web site is [www.naric.org.uk](http://www.naric.org.uk/).
It is their rating of your qualifications against an equivalent UK degree that will be used by any admissions tutor of any Master's course you applied for.
I quote from NARIC:
>
> UK NARIC can provide two officially-recognised documents. The first is a Statement of Comparability, which will include information about the standards of your awards in comparison to UK qualifications. It confirms the status of overseas qualifications and their comparable level in the UK, irrespective of it being an academic, vocational or professional award. It is used by universities, colleges, employers and Government departments and agencies, forming part of their decision-making process.
>
>
>
Upvotes: 2 [selected_answer]
|
2015/05/09
| 3,973
| 17,337
|
<issue_start>username_0: What do you think about the addition of animations to powerpoint presentations? I'm not talking about the generic animations that can be added to powerpoint slides, but the use of animated films to illustrate a concept (I'm in biology by the way).
Despite going into the sciences, I've always had a love for the visual arts, and have been teaching myself animation for the past little while. I think it can be used as a great way of communicating scientific concepts, and will also likely make me stand out in a room full of graduate students. However, I'm not sure if this could possibly back-fire and make me seem less serious about the science? The type of animation I plan on making would hopefully be on par with ones made by professional medical animators.
I'm also a little unsure of the intellectual property aspects of this process. I have made short animations for my presentations before, and although people were quite impressed, my old supervisor seemed to think that she (along with everyone else in the lab) could simply start to use my animation in their presentations. I didn't have a problem with this arrangement since it was my first animation, and I definitely was not expecting to be paid for it. However, as I'm starting graduate school and entering a different lab, I would be a little hesitant to agree if it were to happen again, now with animations I've spent a considerable amount of time working on (especially if I will not be the first one presenting it since I'm a new student). I'm currently thinking about showing an animation I made to a PI that I hope to work with in the end (made specifically to illustrate her research). Although I want to impress this person who I think is a great scientist, I also don't want to be taken "advantage" of... Would it be unrealistic to be paid for some of my work during graduate school?<issue_comment>username_1: Unless the animation serves a specific pedagogical purpose, DON'T. It will usually be more distracting / cheesy than useful. As with website animation, less is more.
Upvotes: -1 <issue_comment>username_2: I'm not in the biological sciences, so take this with a grain of salt.
I'm assuming you're referring to an animation that is something analogous to a technical drawing, giving a visual illustration of a physical process or something similar. Animations can certainly be a great way to convey this kind of information, if well done and appropriate; I've seen animations that helped me visualize processes in ways that I couldn't before. I don't see why they should make you seem less professional, unless they are unprofessionally done. It's a good idea with any presentation to practice it first with a friend or colleague, and as part of this, you could ask them for feedback about the animation.
One thing to keep in mind as you go along is that if making animations is time-consuming, you have to consider whether it is the best use of your time. In research, there is always an issue of balancing your time and other resources between conducting new research, and disseminating work you have already done. Creating animations for a presentation would fall into the latter category, and so you want to ensure it doesn't take so much time that it interferes with your other projects. Helping you strike this balance should be part of your advisor's role as a mentor.
Regarding intellectual property: in general, in academic research, once you have shared your work with the world by publishing or presenting it, it no longer really belongs to you. People will share it, quote it, extend it, and generally use your ideas and work to enhance their own. That is how research progresses. You don't really get the right to stop them. (You may have this right in law, but trying to assert it would be harmful to your standing in the academic community.) What you can expect in return is *credit*.
So if you create an animation and share it in a way that makes it possible for others to reuse it (by distributing a movie file, posting it on YouTube, etc.), you should expect that others may use it in their own lectures and presentations, attributing it to you ("Animation by Cornyvita"). To assist with this, you may want to include your name, date, and affiliation somewhere in the video. This isn't people taking advantage of you - this is you making a contribution to the academic community.
If you don't want other people to use it, don't share it in a form that makes that possible; or explicitly tell them "please do not share/reuse this". However, unless accompanied by a good reason (e.g. "this is unfinished", "it still has errors", etc), this will likely come across as selfish.
It is definitely unrealistic, and unreasonable, to expect other researchers to pay you for the right to use your animations. People don't have the funds for that, and it would be rather contrary to the spirit of academia.
Upvotes: 4 <issue_comment>username_3: >
> What do you think about the addition of animations to powerpoint presentations? [...] the use of animated films to illustrate a concept (I'm in biology by the way).
>
>
>
There are lots of situations where an animation can show what happens much better than a verbal description alone. I'm a chemist, so I immediately think of an animation showing (bio)chemical reaction mechanisms.
I've been using animations of data point clouds from time series measurements: animations work most intuitively if the mechanism being explained actually unfolds in the time domain.
On the other hand, I usually avoid animation *movies* in a presentation.
Having a number of slides that evolve and can be played almost like a movie by going fast through the slides allows better control of where to stop in order to give explanations and also of the time taken for the whole animation: this gives degrees of freedom that help to stretch/squeeze the presentation in order to finish on time.
Bottom line: I'd recommend thinking hard about whether a good (static or evolving) illustration isn't more suitable for the oral presentation than a movie-like animation. But I somehow assume that you like making one almost as much as the other.
>
> seem less serious about the science
>
>
>
This won't be an issue if there's good scientific content in the animation. Particularly if meant for an oral presentation, it should be very much to the point and it should have a clear "added value" over a verbal description with the aid of a few static illustrations.
---
Now about the intellectual property part of the question. Obviously, this depends on your legislation (and on your actual situation).
>
> my old supervisor seem to think that she (along with everyone else in the lab) could simply start to use my animation in their presentations
>
>
>
Situations exist (in my legislation: Germany) where the supervisor does have the right to do this: e.g. if you produced the animation as part of a paid full-time employment contract. In your legislation, it may even be legal to ask the student to sign over the copyright for all they produce during their thesis to the university (in Germany it is *not*).
On the other hand: in 3 out of 4 institutes where I have been, I was responsible for my presentations. If a PI wanted to use one of my slides they asked me (and I was of course happy to allow use), but there was no default unasked use. Institute no. 4 has slides handed to the supervisors/director by default - but in practice they ask anyway for a slide to be prepared for them for a particular purpose when they need something.
In any case, anyone showing your animation needs to attribute you as the author.
---
There is nothing wrong with asking to be paid for producing animations. But you should expect that it won't work out. Technically it could be done with a student employment contract, which would mean that the animation is then a work made for hire (and the employer gets the copyright, including the right to tell you to *not* have a "cornyvita's animation" line in the movie). I'd think it likely that money is scarce, though: PIs usually won't have money to hire you for producing animations. Tons of other things are more urgently needed.
I therefore recommend that you make up your mind about why you want to do the animations (hobby or paid work) and, if you produce them, in what "currency" you'd like to be paid: citations? Being known? Money (hint: there may be faster and easier ways to get this; being a professional illustrator is not generally known as the fast track to becoming a millionaire...)?
But: Don't underestimate the value of being known as the author of those really great scientific animations. This can translate to employment later. While super-fancy presentations won't get you hired if you suck scientifically, being a good scientist *and* being known to deliver good presentations is a hard-to-beat combination. Note that the presentations are the icing on the scientific cake: don't neglect your science for the animations. Also, this means that preferably *you* are showing your animations at conferences. Second-best option (in addition?): your PI attributes you explicitly as the author ("This is illustrated by cornyvita's great presentation here") rather than listing you only among all those names on the acknowledgements slide at the end. Just as the PI can distinguish other important contributions: saying "cornyvita isolated the protein, and ..." costs only a few milliseconds more than "we isolated the protein, ...".
The only situation that comes to my mind where I think there could be money for producing animations in the usual university setting is projects for producing teaching material. Some projects also have a bit of a PR budget, but my guess is that a single animation will easily consume the whole PR budget for a project...
Upvotes: 3 <issue_comment>username_4: Nate and cbeleites have handled well the finer points of the advisability of making the animations. I would encourage you to go ahead with them, but there are a large number of detail-dependent considerations that need to be made. I want to weigh in on the intellectual-property aspect of your question.
>
> Would it be unrealistic to be paid for some of my [animation] work during graduate school?
>
>
>
In a word, **yes**. You are already being paid to produce research and its corresponding dissemination materials. Asking your group to pay extra for a specific class of dissemination material will strike people as odd and probably as very selfish. I think it is unlikely that anyone will agree to this and you risk alienating your group and the rest of the community.
In addition to this, I do not think you should attempt to limit the spread of your animations. Instead, **get ahead of the market**, and engineer a situation where (i) people will want to share and use your animations, and (ii) people using your animations will bring credit to you and your group. More specifically:
* Make sure all your animations identify you as the author, and your university and research group. Make this visible enough that viewers will see it and have enough time to register it, but discreet enough that users of your animations will not be tempted to remove it or skip over it.
* Keep your animations publicly available in a visible and discoverable place online. This can be a YouTube channel, GitHub repository, university webpage, something else, or some combination of the above. Make it easy for people to find the animations *from you*, instead of getting *n*th-hand copies from someone in their lab.
This online space can then grow to include, say, your research papers, potentially turning some of your animations traffic into interested readers of your publications. This gives you as direct a benefit to your career from your animations as you can really hope for.
* Provide clear licensing details for your animations, which could be [CC-BY](https://creativecommons.org/licenses/by/3.0/) or [CC-BY-ND](https://creativecommons.org/licenses/by-nd/3.0/). Make this clear but concise and easy to follow. It is up to you to gauge your audience, but I would claim that people are much more likely to use your content in ways that you would like if you (1) actively give them permission to use it, and (2) clearly state the ways you would and would not like your content to be used. If you provide your animations under CC-BY and state as the attribution requirement that people provide a link to your repository, you are creating a contract of sorts between you and those users, which adds a barrier on *their* side of the channel (they need to actively breach the contract to misuse the content).
Moreover, if you do this early then you can get ahead of the game and set clear rules for how you want your own research group to treat the animations. Make it easy for people, and particularly for your own group, to know how to credit you, and they are very likely to do it.
Academia is a weird place, and there are many ways to be compensated for work which are not monetary. (Think, for example, of journal editors and paper referees, who perform their jobs on a voluntary basis, in the understanding that they are service activities that actively contribute to their standing in the community, and therefore indirectly to the visibility of their research, their avenues for further collaborations, and ultimately them keeping their day jobs.) You should not expect direct financial compensation from your animations, but there is no reason why you can't market on them and use them to further your career. You just need to be somewhat more clever about it.
In this sense, the situation admits a clear analogy to the (sad) state of the music publishing industry over the past twenty years or so. People are going to pass around your animations, whether you want it or not, in much the same way that people are going to share music illegally if it is the cleanest way to obtain it. The music industry we have spends all its resources trying to clamp down on this, with little effect beyond alienating people; don't be like them. Instead, be like the music industry we wish we had: actively trying to harness the new technologies and media to generate new, fresh, legal ways to get the content, that actually make you *want* to use them. This is what I mean by beating the market.
Upvotes: 5 <issue_comment>username_5: You've got some great answers already, so here's something shorter and more general.
A well-chosen, well-executed animation (which can actually be quite simple) can really help a presentation. There are (at least) 2 caveats to this, which if you bear them in mind will help you make best use of your time and make a good impression:
* You're not the best judge of how your animation (or even the whole presentation) comes across.
* You'll need to rehearse the presentation more. Narrating over the animation is more akin to an actor learning lines than to normal scientific presenting. You need to get the timing spot on and be confident. If you do this at/near the beginning of your talk you'll be well set to carry on doing it well.
Overall these mean that it's more important to get other people's input. You might want to sketch something out with a friend working on something related, then with your PI, before putting lots of time in on the animations. Then rehearse in front of the research group or perhaps some other postgrads from the department. You have some control over where your work ends up, but assuming it's good, expect to share with the rest of the research group and collaborators. Wider sharing is up to you after discussion with your PI.
Upvotes: 2 <issue_comment>username_6: There are two types of animations present in presentations: Those that should be there and those that shouldn’t. Luckily for you, if your supervisor wants to use your animation, you can consider it to be of the first type.
The difference between the two is pretty simple. Good animations are those that illustrate scientific facts or research. Thinking of a biological example, say you’re discussing the formation of a multi-protein complex and have evidence to suggest that *A* meets with *B* first, *B* then phosphorylates *A* which leads to the association of *G* etc. … It’s a long story that you can present in a series of pictures or a ‘simple’ animation. Another one would be a protein crystal structure that you animate to be shown from different sides so you can discuss different things about them.
The bad ones are the ones that serve no further purpose. Although I’m a chemist, I’m on a campus full of biologists. During our last retreat, one presentation discussed something to do with bats (I think). About every second slide had an animated picture of a bat flying in and flapping around. I think even Batman appeared once. None of these bats served any purpose whatsoever in the flow of the talk. I only remember them because they were irritatingly out of place. Do not include this kind of animation in your talks.
That is not to say a flapping bat is a big taboo. Rather, if you’re discussing how bats fly when compared to insects and/or birds, an animation of a flapping bat can well serve a purpose. But if you’re discussing how *Pseudomonas aeruginosa* infects bats, it’s out of place.
Upvotes: 2
|
2015/05/09
| 1,675
| 6,837
|
<issue_start>username_0: I am an undergraduate student who got into a voluntary internship for a year in my department.
Without going into much detail, I'm a trainee in the lab where programming lessons are held, with lots of computers available for the students (Teaching Lab).
The main concern is that my supervisor, who also happens to be the administrator of the lab, does not seem to believe in my abilities and underestimates my intelligence, and I can't understand why.
When I signed up for the internship I submitted my resume, followed by a short interview at the university. I can say I got the job easily, mainly due to my background.
I was hoping that I was going to work on something productive, like a research project. On the contrary, my supervisor seems to be very lazy and constantly assigns chores to me (cleaning, being his personal mailman around the university, etc.).
At the start, since he found out I was bored, he suggested I study some basic HTML (fact is, I worked as a professional web developer some time ago!). If I reply that the tasks he sets for me are easy, he gets mad and tries to trip me up so that he can show that I have no idea what I'm talking about.
Professors who knew me before and visited the lab asked my supervisor to offer me some motivation for extended bibliography and some more productive learning. He answered that if I have the right foundations then we can negotiate for something more productive! (Yet, he only has one paper, published 20 years ago).
What are your opinions on this? I was about to withdraw due to boredom, but I only have some months left. Should I talk to a professor about this topic? Am I expecting too much, or am I overqualified?<issue_comment>username_1: While your situation is tricky, from your description it sounds like you have a bit of an ego, which is not helping you.
One sure shot solution to the problem is to **let your actions speak for you**.
For instance, try to apply your web development skills when your supervisor expects something done in HTML. Try to show how meticulous you are when you are given chores (I know it can get boring, but maybe you will rise in his eyes if you do chores in an impressive manner). Find out what he is doing and has done in the recent past and see if you can act on something that helps his work.
Upvotes: -1 <issue_comment>username_2: I don't say this lightly, but given everything you've said: **I think you should probably quit the internship, or at least present the prospect of that to your supervisor**.
Some key points:
1) You say it's a "voluntary internship". Well, all academic internships are voluntary (I hope!), so I think what you mean is that you are a *volunteer*, i.e., unpaid. [**Added**: I just looked this up, and apparently this phrase is quite common in parts of the anglophone world outside of North America. Sorry.]
2) You wrote
>
> I was hoping that I was going to work on something productive, like a research project.
>
>
>
That is a very reasonable expectation for a volunteer internship. However, the parameters of the job seem to be very different:
>
> I'm a trainee in the lab where programming lessons are held with lots of computers available for the students.
>
>
>
But that doesn't sound research-related at all: it sounds like you're in a **teaching lab**, not a research lab. If you were actually doing the training, you'd be some kind of TA...without pay. Unfortunately:
>
> On the contrary, my supervisor...constantly assigns chores to me (cleaning, being his personal mailman on the university etc).
>
>
>
Having to clean up a laboratory space *after using it* is very reasonable. In my branch of the academic world at least (North America), cleaning up after other people is a paid job, not part of an internship. Similarly, mail delivery is the sort of thing for which someone is usually paid an hourly wage. If you are doing some of this and some of something else, it might be okay if the something else were especially attractive and rewarding. But given that you're not, it sounds to me that you're simply being exploited. Certainly I would feel that way if I were you.
>
> Yet, he only has one paper, published 20 years ago.
>
>
>
Yikes. So the professor is probably not even **research-active**, or at least not to the level necessary for it to be plausible that he is the head of some kind of research team. He is not a good choice to supervise your research. In view of everything else you've said, I'm afraid that it seems likely that the business about your having a "proper foundation" -- especially in the context of his willful ignorance of the skills that you already have -- is just an excuse.
I would go to this professor and say that there's been a misunderstanding. You thought you were getting involved in a research internship, and as it hasn't panned out that way, you'd like to give notice. If he wants to change your mind, have him mention not just the prospect of future research "when you're ready", but actually nail down research that you can get started on right away.
It would also be good to speak to at least one of the other faculty members you've mentioned. I don't know where in the world you're writing from, so it's possible that your local academic culture is very different from mine. But unless you find out that it would be a big bridge-burning mistake to quit your internship, I think you should be angling for that outcome.
Upvotes: 6 [selected_answer]<issue_comment>username_3: Your relationship has degenerated into mutual provocation, it sounds like. Not a healthy situation. Just try to get out of there as gracefully as possible. If you were being paid, it might be somewhat different. But in an unpaid internship -- well, it's reasonable for you to make yourself useful to some extent; but you should be learning and growing.
You might be interested to read what the U.S. government considers the definition of an unpaid internship to be:
1. The internship, even though it includes actual operation of the
facilities of the employer, is similar to training which would be
given in an educational environment;
2. The internship experience is for the benefit of the intern;
3. The intern does not displace regular employees, but works under
close supervision of existing staff;
4. The employer that provides the training derives no immediate
advantage from the activities of the intern; and on occasion its
operations may actually be impeded.
See <http://www.dol.gov/whd/regs/compliance/whdfs71.htm>
---
Edit
>
> Should I talk to a professor about this topic?
>
>
>
Yes, that would be fine, as long as you can avoid whining or complaining. If you're past the point of talking about the problem calmly then it would be better to leave as soon as possible.
Upvotes: 3
|
2015/05/10
| 1,961
| 7,442
|
<issue_start>username_0: I just noticed the following words from arXiv:
"You are encouraged to associate your [ORCID](http://orcid.org/) with your arXiv account. ORCID iDs are standard, persistent identifiers for research authors. ORCID iDs will gradually supersede the role of the arXiv author identifier."
I thus wonder if a person's profile in ORCID is really that important, given that the person would like to pursue a career in academia?<issue_comment>username_1: My personal opinion is that right now ORCID has pretty much zero traction in academia: when my university partnered with ORCID and pushed uptake quite hard internally (by its standards, at least) I was the only person in my department who'd even heard of it (whilst nearly all knew of, and used, the arXiv).
I think it likely that ORCID take up will increase, but in the short term it's of no importance at all.
**Update**: [Here](http://repository.jisc.ac.uk/6025/2/Jisc-ARMA-ORCID_final_report.pdf) is a JISC report on the trial implementation of ORCID at a number of UK HEIs, including my own. It lists a number of benefits (the key ones covered in @username_2 's answer), but more importantly illustrates the high level (research council etc) support for ORCID. With my sceptical hat on I will note that I couldn't find numbers as to how many new ORCID IDs were generated by this pilot.
Upvotes: 4 <issue_comment>username_2: While I largely agree with username_1's answer (ORCID might matter in the future, but doesn't right now), I see one place where it may already matter and a reason why it should come to matter more in the future.
Right now, there is a strong implicit presumption of the uniqueness of a scientist's name, and all of the literature searches and citation databases, etc, of the world get rather confused when you have a person who either a) shares the same name as other practicing scientists or b) has a name that changes over time (e.g., marriage, gender identity change) or is represented differently (e.g., transliterated) in different papers.
On this site, we have a number of good, difficult questions dealing with the problems that [name change and transliteration](https://academia.stackexchange.com/search?q=name%20change%20is%3Aquestion) cause, which is particularly acute for academics in countries that don't use the Latin alphabet (e.g. [this excellent question](https://academia.stackexchange.com/questions/44310/names-in-my-language-used-in-publications-are-inconsistant-should-i-worry-about)). These problems will grow in importance as the number of practicing scientists grows and becomes more diverse, and as the duration of the readily searchable literature grows as well.
In short: there is a rapidly growing need for *something* like ORCID that makes it easy to distinguish scientists without context-sensitive text mining. Whether ORCID is that thing, and how long it will take for it to be widely adopted and effective, are both open questions.
Upvotes: 6 [selected_answer]<issue_comment>username_3: Let me count how many academic IDs I have:
1. [Google Scholar](http://scholar.google.com/citations?hl=en&user=TuJeDtcAAAAJ)
2. [ORCID](http://orcid.org/0000-0003-2977-7326)
3. [ResearchGate](https://www.researchgate.net/profile/Stanislav_Kolenikov)
4. [RePEc](https://ideas.repec.org/e/pko3.html)
5. [ResearcherID](http://www.researcherid.com/rid/F-8341-2010)
6. [Microsoft Academic Research](http://academic.research.microsoft.com/Author/23447997)
7. [Scopus ID](http://www.scopus.com/authid/detail.url?authorId=6507422527)
They never agree on the counts of my citations, of course, or even on the number of published works, even though some of them cross-link to one another.
Google, Scopus and Microsoft are, of course, automatic. ORCID, ResearchGate, RePEc, ResearcherID require registration. I don't have arXiv ID; I've never been able to extract value from that service, and at the time I bothered about these IDs, it was purely mathematical, which did not quite ignite me, and it failed to compile my LaTeX code because I had graphs in my papers that exceeded the then-existing file size limits (something like 1 or 2Mb in early to mid 2000s?). I am sure I am missing another five or so academic ID services. I remember starting some of them, but never being able to populate them properly, because they said something like: "Oh, just push this button, and we'll search ISI WebOfKnowledge for you" -- guess what, they don't do it on *their* end, they do it on *your* browser, so you need to have access to ISI for that stuff to work.
Does missing any one of these services matter? This is probably discipline-specific; I would venture a guess that nobody in this thread heard of RePEc (REsearch Papers in EConomics)... but if you talked to economists, they probably would not have heard of ORCID in return. Given that you are coming from arXiv, you are probably a mathematician or a physicist, and arXiv is probably the default for these disciplines, with little to no need for other IDs. That is to say, if you only care about becoming a famous physicist through publications, stick with arXiv ID; if you want to be a good citizen of academia, so that other academics sort of recognize you when you step out of the door of your home department, or if you reasonably expect to collaborate with other disciplines, ORCID will probably help.
Coming back to the OP question, my specific suggestion would be to check what people put on top of their CVs in your discipline/in the departments where you are looking for a job. If it is an ORCID, you need to get one. If it is an arXiv ID, you need to get one. If it is a picture of their chihuahua, you... well, you get the drift. (My scanning of CVs in a discipline that has a sort of dual identity, sociology, when I was interviewed by a sociology department years ago, revealed that some of the faculty think of themselves as *scholars*, and list their published works under *Scholarship*, with these works being books or chapters in books, with few to no papers -- these would be doing sociology theory or some social forces or stuff like that; while others would refer to themselves as *researchers*, and list their published works as *Research*, which would consist, for the most part, of journal papers, which would be more likely to use quantitative methods. I don't know much about sociology, but I imagine there are internal tensions between these groups, with one not understanding the approaches and achievements of the other.)
Upvotes: 3 <issue_comment>username_4: NSF has just thrown out its CV/resume system for grant proposals and adopted [ScienCV (NIH's system)](https://www.ncbi.nlm.nih.gov/sciencv/), roughly speaking. This system can, with a fair amount of work, mine your ORCID and set up your CV for NSF's grant proposals for the future. If you submit a proposal without using this "new" system, NSF will return it without review. ORCID is probably the easiest way to construct what NSF and NIH want in ScienCV, though I haven't completed mine yet (I've barely started it).
ORCID may never take off as a universal system for finding researchers and their latest publications, but US researchers who want to keep their jobs will be forced to keep theirs at least somewhat up to date so that the granting agencies at the federal level can see them and use them for review. Whatever ORCID does, it still suffers from the updating problem.
Upvotes: 2
|
2015/05/10
| 791
| 3,516
|
<issue_start>username_0: Many months ago, I had to drop out from a prestigious university.
Recently, the university contacted me and asked whether I was considering enrolling again, and said that if so, I should do so before a certain deadline.
Is it common for universities to contact drop outs?<issue_comment>username_1: The answer may depend on why you had to drop out, but many universities simply consider you to still be their student even if your studentship is disrupted by events beyond your control, and will assume you are likely to wish to continue your education there.
You do not say why you had to drop out, but common reasons for somebody to withdraw or fail include: mental health issues, financial crisis, physical illness, death or serious illness of a close family member, visa problems, etc. None of these indicate anything negative about your potential as a student *even if they caused you to fail all of your classes.* I have known people who dropped out for a wide variety of such reasons, many of whom then returned to finish at the same prestigious institution after their life circumstances had improved.
Many universities thus make it easy for a student who left for such circumstances to return and complete their studies, and that is likely what is going on with your university as well.
Upvotes: 6 <issue_comment>username_2: Many students who drop out eventually return, and being away from the university they may be unaware of the deadline. So they may have simply been sending a courtesy message informing such people of when the deadline for re-enrollment is. There probably isn't any deeper meaning than that.
Upvotes: 5 <issue_comment>username_3: In theory the classes you took should be good forever, and the program would never change. If this were strictly true, then you could take classes for years, a few at a time and with frequent long breaks and still obtain your degree.
Most schools consider students to be students from admission onwards, and they never take them out of the system.
However, degree programs change, requirements change, and the schools must change their curriculum. While they grandfather in older, possibly out of date classes, and older programs, they have to limit them, and the easiest way is by time.
If you start a bachelor's program you can expect to be able to follow it without issue for 4-6 years without change even if the program changes for newer incoming students. If you change degrees you'll have to adopt a new program.
However, if you drop out of school, the school doesn't think of you as a person who will never return. But if the programs and classes are changing they will remind you, even if you haven't attended for some time, that in order to fit under the old program you'll have to complete it by a certain date.
So this is why they've sent you a letter. You are, in their eyes, still a student, eligible for a degree in a specific program, and they want you to be aware that even though you haven't been in some time, if you had intended to complete your education with the program you started then you will have to do so by a deadline.
Not all schools send out such letters.
Upvotes: 2 <issue_comment>username_4: Watch out for phishing; they may not be who they say they are. Don't give the caller/writer any information beyond what you'd give to a stranger on the street, and if you choose to give the issue further thought, verify the offer at the admissions department's public phone number.
Upvotes: 1
|
2015/05/10
| 560
| 1,922
|
<issue_start>username_0: I wish to properly cite the number of times a paper has been cited according to Google Scholar. However, I couldn't find a proper answer to this question.
The paper in question is:
>
> <NAME>., <NAME>, and <NAME>. "Latent dirichlet
> allocation." the Journal of machine Learning research 3 (2003):
> 993-1022.
>
>
>
<NAME>., <NAME>., & <NAME>. (2014) note that
>
> LDA is widely used (...) and has been cited over 8,000 times
>
>
>
But they cite no source! What is the proper source to cite here?
My first thought was something like this:
>
> LDA is widely used (...) and has been cited over 10,000 times since its
> publication, a search on Google Scholar reveals (Google Scholar,
> date).
>
>
>
edit: I e-mailed my advisor and ended up citing Hansen et al. (2014) saying that the paper had already been cited 8000 times. Not really a satisfying answer ... alas ..<issue_comment>username_1: I don't find such citations to citation counts very helpful in reading an article. Assuming such numbers are reasonably accurate, they only represent a snapshot of the popularity of a technique at the time you wrote your article. I would recommend simply saying that it is a popular technique and leaving it at that.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I would not give a formal citation, but I would mention something like this:
>
> LDA is widely used (...) and has been cited over 10,000 times as of July 2020, according to Google Scholar (<https://scholar.google.com>).
>
>
>
Key points to note:
* The URL to Google Scholar is sufficient as a reference. Nothing more is needed and nothing needs to go in the bibliography.
* It is very important to specify the month and year of the search since, obviously, when the reader eventually reads your article the citation count will almost certainly not be the same.
Upvotes: 1
|
2015/05/10
| 2,473
| 10,518
|
<issue_start>username_0: I'm about to finish my 4th semester as a pure mathematics undergraduate. I took one introductory programming course in my first semester, and another, more advanced one recently. Although it was only one course out of the several I took last semester, I spent much more time on the computer than on the math books, and I conjectured that I would probably be happier as a programmer than as a mathematician.
The problem is that I spent most of the past 3-4 years doing nothing but mathematics. Practicing problem-solving, developing critical thinking, establishing deep knowledge in foundational mathematics and growing very good general knowledge about advanced math topics and history of mathematics. Besides, I get top grades in my college, and I do like what I am doing.
In brief, I can see myself as a promising person in the world of mathematics. I am confident that, in comparison to most of my peers, I am stronger in this domain. And this, in particular, is what makes me hesitate about even thinking of switching my major.
Above all, I enjoyed that programming course much more than I enjoyed any other math course that I took (except, perhaps, metric topology and some abstract algebra). And I have a feeling that this is what I should truly go for.
What would be the best decision here, especially since I have only one year left to get my Bachelor's degree? If I switch my major, I will lose one year of my life because of the regulations in our college.
Thank you.<issue_comment>username_1: Switching one’s major (or subject altogether) is generally something that should not be done lightly. But there are often valid reasons for doing so, and it can take some time to notice these reasons.
Assuming you took some time to choose the subject you studied (which it sounds like you did), there are probably good reasons why you chose it. Probably you like maths (you hinted it was fun), maybe you were just good at maths at school, or maybe you just wanted to be one of the cool nerds with glasses. One student at my university started studying chemistry because she found it fascinating and liked it at school. I started my master’s majoring in biochemistry because I liked the work we did in our practical courses and because it sounded interesting.
Now that you’re thinking about switching majors, something in your motivation must have changed. I’m inferring from what you said that you spent more time (and maybe even had more fun) sitting at the PC doing some coding work. That’s a very positive way to think. The student I was talking about above underestimated the workload she was going to put into chemistry studies. She also failed a practical course (that really wasn’t dramatic; it’s a course that more than 10 % fail in their first attempt, including me, who’s doing a chemistry PhD now) and finally started spending more time with her other scholarly love, Japanese. In my case, I still (would) enjoy the practical parts but I underestimated the amount of rote memorisation required for passing the exams — I’m bad at that.
Once you’ve identified the true reason why you want to switch, you need to debate whether switching is an appropriate action or not. The student I keep mentioning switched to Japanese at a different university closer to her home. I still think that she just made herself too much work, tried to understand stuff way too deeply at a way too early level. She could have stuck with chemistry if she had tried to learn less (yes, that’s sometimes a thing). I haven’t spoken with her for quite a long time, but I assume she thinks it’s the right choice she made (She did take a long time considering it). In my case, switching majors from biochemistry to inorganic chemistry was essential to pass my master’s.
Now onto you. First off, programming or computer sciences are a lot closer to maths than Japanese is to chemistry (although they both sound similar to some, I am told). They’re not as close as inorganic and biochemistry in my opinion, though. It’s highly possible from my point of view to enter similar master’s programmes with both degrees, especially if you’re looking somewhere where both of them meet. You will also have gained valuable skills that you list in your question — problem-solving, critical thinking etc.
* Do you think that you can fulfill the requirements and achieve the degree you’re aiming for now?
* Do you think that your final mark might be (significantly?) *worse* if you switch?
* Are there computer science/programming programmes that allow a pure mathematician to start them?
If you answered those three and a half questions with *yes* then don’t switch. If you answered the half question with *no* and the others with *yes*, I would recommend you don’t switch. Only if you answer *no* to all three questions should you *definitely* switch.
---
**The most important note of the whole text:** *Have a chat with an advisor* if grains of doubt remain, e.g. if you’re not sure about fulfilling the requirements of whichever master’s programme you’re interested in due to your non-programming maths bachelor degree.
Upvotes: 2 <issue_comment>username_2: Change-of-interest among mathematicians is more common than many would have imagined. Reasons vary, some decided to do something more applied, some decided to work in the industry for the better pay, some decided to work in a more dynamic environment than the academia...
I can assure you that the time you spent on mathematics is time well spent for your intellectual development and that your advanced math training will be a valuable asset wherever you go. Even though you may not be solving abstract algebra or real analysis problems for your future career, it's the analytical skills that you're equipped with that will make a difference between you and other candidates. Mathematicians, if well trained, develop unique insights to problems and are often highly detail-oriented.
If you want to switch to computer science, there are a number of Masters Programs in computer science that will accept non-CS majors. In fact, if you have taken sufficiently many advanced courses in computer science, you may apply to grad school in computer science right away without going through the (often exorbitantly costly) masters program. Many of my classmates have done that and successfully transitioned into computer science.
Last but not least, speak to your academic advisor and discuss your situation.
Good luck!
Upvotes: 3 <issue_comment>username_3: Good question (+1). Generally, I agree with @username_1's advice (also +1), with the exception of the last sentence - I don't think that a conversation with an advisor is or should be the critical factor. Ultimately, I believe that the **main critical factor** should be your *gut feeling*, of course, combined with *practical considerations*, some of which username_1 mentioned. I think that a math degree, especially undergraduate, provides a **solid foundation** for your future career, regardless of your decisions along the way, such as to continue career in mathematics or switch to more applied areas, such as computer programming, operations research and data science.
Therefore, **my recommendation** is to *continue* with your current program, while testing some specific areas, where possible (i.e., for programming, participating in open source software projects might be a good idea), unless you are *absolutely sure* that you **have** to switch majors. Let me finish my advice with mentioning a popular saying/idiom, which seems to be appropriate here: *"Don’t change horses in the middle of the stream"*. Good luck!
Upvotes: 2 <issue_comment>username_4: I see *good answers* here already.
Adding to those:
I'm a Java developer who has been working in the industry for almost 2 years now. Currently I'm pursuing my Bachelor's degree in Software Engineering.
I was one of the best in my class at Maths in high school. But *unfortunately* I chose the path to programming. I know not everyone will agree with the word *'unfortunately'* that I wrote in my last sentence. Personally, I am really, really worried about my decision to drop my Maths degree.
**Please don't make that mistake!**
There are a lot of paths you can choose to continue along when you have a Mathematics degree in your hand. (Games, of course, come to my mind; [there are many](https://softwareengineering.stackexchange.com/q/136987/167527))
Continue your Maths degree. After that you can take some industry-recognized exams (e.g. Oracle certifications, Microsoft certifications). These exams will not take too much time to finish. As Aleksandr said, you have a strong foundation if you have a degree in Mathematics. That's my opinion on this good question.
Good Luck!
Upvotes: 1 <issue_comment>username_5: Some points to consider:
1) You don't need a CS degree to become a programmer (as Aleksandr says) or to study CS at a graduate level (as username_2 says). There's nothing to stop you from completing your math major while spending as much time as possible on the programming that you like. When you graduate, getting a programming job should not be substantially trickier than if you had majored in CS.
2) Just because you are a promising person in the world of math does not mean you are obligated to fulfill that promise (as dinosaur says). Only do math if it's what **you** want. Many people with great promise drop out of the field at every stage of a career, and that's okay. Some examples:
* Undergrad: <NAME>
* Grad School: <NAME>
* Professor: <NAME> (whose impact on mathematics is in my mind greater as a philanthropist than it was as a mathematician)
3) You can come back, if math is truly where your heart is. I know several people who have turned away, be it towards programming, finance or elsewhere, only to return.
4) Towards (3), programming skills are incredibly useful in many areas of mathematics.
From what you've described, I would follow Aleksandr's advice and complete your math major (it sounds like you might already be done!) with an eye towards developing skills as a programmer. If this is not possible, keep in mind that losing a year (and it wouldn't be truly lost) is a far better option than failing to notice that your current path is a waste. Remember that, even if you stay in the academy, your formal education is only a small part of what you will learn. Start casting about for what you are most passionate about, and good luck!
Upvotes: 2
|
2015/05/10
| 2,284
| 9,273
|
<issue_start>username_0: Is it possible to submit a paper to a scholarly, peer-reviewed journal without PhD? More specifically, what would be the bare minimum qualifications that would grant one the possibility of the paper being published in a peer-reviewed academic medium?<issue_comment>username_1: Yes. In practice, graduate study is one of the main ways people attain the skills to write such a paper, but a Ph.D. is not a requirement. Qualifications would be a choice of the individual journal, but I don't know of any that have a policy of requiring degree credentials from submitters. For example, quite often graduate or undergraduate students write such papers.
Upvotes: 4 <issue_comment>username_2: Submitting an academic paper for publication (and potentially getting it accepted) does not require any qualifications whatsoever. You don't need a PhD; you don't even need to have gone to college. There are no educational, employment, or membership requirements at all.
That's not to say it's easy to get a paper accepted with no formal training in the field. Learning how to write a compelling paper is much easier if you have a mentor to offer guidance. However, if you can figure out how to do it without a PhD, then your lack of a PhD will not be held against you.
Upvotes: 8 [selected_answer]<issue_comment>username_3: Yes, it's possible to get a paper published without having a PhD: PhD students do it all the time. Submitted papers are supposed to be evaluated according to what they say, not who said it.
Upvotes: 6 <issue_comment>username_4: I am a bit more skeptical than the majority of the answers.
The first reason is more profane:
There are journals with more reputation and some with less reputation. The journals with more reputation have more readers, and if you are published in such a journal, it will foster your reputation. The direct effect is that *everyone* tries to get published in the most prestigious journals. So the editors get swamped with papers and must choose by priority. If two results are equally impressive, the probability that the more experienced and reputable scientist will be published is very high.
So the standard approach is to contact the journals in descending order of reputation and ask for publication. So, yes, you can be published, but it is likely that you must choose a journal with less reputation and that means that your results are likely to be ignored.
That gets us to the second problem:
If you have an impressive result, then the editors will scan your paper more closely. Unusual language (there is a specific lingo you use in papers), strange formatting (Word instead of LaTeX) and an impressive result from someone who was not known before raise a red flag. If Anonymous Mathematica claims that "your lack of a PhD will not be held against you", then I say, nope, this is not remotely true. It is very likely that your paper will not be scrutinized any further and will simply be rejected. If you try to battle the decision, the label *crackpot* is attached to you faster than you can breathe. So you must be extremely careful when publishing a paper as an amateur, especially because there are journals which are so...problematic that publishing there will harm your reputation.
The arXiv uses an endorsement system to guarantee that only people from universities and research centers can submit papers; other people are locked out.
If you really have an impressive result: In both cases, arXiv and respectable journal it is unfortunately necessary to contact someone from a university etc. which helps you to get through the barrier.
So, in principle yes, but there are barriers. Do not underestimate the problem.
UPDATE: Some information beforehand. The reputation of a journal is measured by the impact factor and since 2005 also by the [h-index](http://www.scimagojr.com/journalrank.php?area=0&category=0&country=all&year=2013&order=h&min=0&min_type=cd). For 2013 Nature has rank 7 in the impact factor with 38.6 and leads the h-index with 829. Science has rank 20 (but is preceded by 9 (!) Nature specializations) and is second on the h-index with 801. So pretty important journals.
Nature offers an [overview](http://www.nature.com/nature/authors/get_published/) of the publication process, and in fact the rejection rate has steadily increased from 89% to 92%, which I would call "swamping".
Corvus now claims that my analysis is glaringly wrong and gives the explanation that other journals except Nature and Science do not have the need to pick between papers, if I have understood him right.
Corvus, is it your claim that other highly reputable journals, both by h-index and impact factor, do not have the problem of picking between papers and therefore high rejection rates? Do you stand by your claim? If my information is outdated or simply wrong, then I will retract it, but I **will** research it.
Glaringly wrong, yes or no?
JeffE, do you agree that being a member of the Computer Science Department in Illinois (which incidentally built the ORDVAC and ILLIAC and is currently responsible for the LLVM Compiler Infrastructure), with a very long tradition and its own ACM chapter, might give a paper some recommendation?
Upvotes: 2 <issue_comment>username_5: It's not only possible but it is a requirement at many universities that your Ph.D thesis be completely based on peer reviewed papers.
Another issue here is the fact that in science only the research results should matter, not who is presenting those results. If Prof. Dr. X has submitted a paper that is found to be wanting, then it should be rejected. If <NAME> Y, who dropped out of primary school, submits a paper containing groundbreaking results, then that paper should be accepted. If this were not (in principle) true, then that would mean that science is not conducted in an independent way. Arguments from authority, rather than those based on the merits, would to some degree corrupt the scientific process.
Upvotes: 4 <issue_comment>username_6: As an extreme example, here are some groundbreaking pre- or no-PhD discoveries:
* <NAME> developed his [famous integration theory](http://de.wikipedia.org/wiki/Lebesgue-Integral) as a young high school teacher in 1902.
* [Space–time block
coding](http://en.wikipedia.org/wiki/Siavash_Alamouti), a theoretical concept that is now essential to wireless communications, was invented in 1997 by <NAME> as a practicing engineer with a master's degree.
* The [AKS primality test](http://en.wikipedia.org/wiki/AKS_primality_test) was the result of 2002 undergraduate research at the Indian Institute of Technology.
It's clear that such contributions warrant a publication irrespective of the author's status.
Upvotes: 5 <issue_comment>username_7: Of course publications want to be read. There is no doubt that anyone who can submit work that is clear, concise, innovative and readable will be considered. But no matter what an individual's personal experience has been, or even the official policy, it comes down to the work, how it is expressed, and the integrity and professionalism of the scientist author. Somehow, nitpicking others' comments seems to be related more to the desire an individual may harbor to be published- even if it's in an online forum!
Upvotes: -1 <issue_comment>username_8: As others have said: no, a PhD is not required. Remember that Einstein didn't have a PhD when he published his paper on the photo-electric effect, yet it would win him a Nobel Prize.
A requirement for getting a PhD degree is often to publish a paper where you are the first author. So then you don't have a PhD (yet) either.
What's more important is **references**. Even if you have a PhD, if your list of references contains only two items, chances are your paper will be rejected, unless of course you made a Really Great Discovery.
References are important because they show that you studied and are familiar with the work of your peers in the field.
Upvotes: 2 <issue_comment>username_9: If a peer-reviewed journal published a paper on the basis of the letters after the authors' names, then the journal would undermine its own credibility. When you drive across a bridge, or undergo open heart surgery, the engineers' or surgeon's professional registration is your assurance that they know what they're doing; but scholarly publications are supposed to be written in a way that makes them **self-evidently** good — they shouldn't need propping up with letters after the authors' names. Look at a few reputable journals and you'll see that they just print the authors' names (no letters after!) at the top of each article.
Upvotes: 0 <issue_comment>username_10: Yes you can. I was still an MA student when my paper draft was first accepted. I even attended conferences. Do not let anyone tell you otherwise.
Upvotes: 3 <issue_comment>username_11: Another example... found in the *Chronicle of Higher Education* (July 19 issue)
<NAME>, age 13: Her paper was just published in the journal *Pediatrics & Child Health*
It was her 6th grade Science Fair project. The judges suggested she write it up for publication. She did. It was rejected by one journal, but accepted by the second one she tried.
[LINK](https://news.yahoo.com/13-old-girl-gets-study-211803783.html)
Upvotes: 2
|
2015/05/10
| 1,857
| 7,678
|
<issue_start>username_0: As I'm sure many lecturers/professors would attest, one of the frustrations of teaching can be the continuous asking of questions that had a student read their syllabus and/or navigated the online learning site, would have most likely been answered.
I teach a social science class and I spend a lot of time on my syllabus, and even more time on the learning site we use. I have an abundance of extra resources to help students out and everything is neatly organised. Regardless, I still receive countless emails and questions not about the content of the course, but when my office hours are or where my office is, or when is the assignment due, or do they have to attend class, and so on.
In speaking with a number of academics, some of the solutions have been creating assessments based solely on the syllabus, such as a 5% quiz in the first week of class. A colleague of mine who was concerned about students not knowing how to navigate the library created an assessment where students had to go to the library and answer a set of questions.
My faculty is quite strict about assessment tasks though, and I've been informed I can only have a max of 2-3 in my unit, so I'd rather not waste them on a syllabus test.
I was thinking about setting up some online bonus mark quizzes where students who wanted to earn a little extra credit could complete them. They wouldn't be worth much, maybe .5% per quiz, perhaps totaling to a bonus of 2.5% or something similar (my best student typically has a grade of around 94%). They'd be on the syllabus, the learning website, perhaps a research bonus quiz (navigating online databases) and so on.
I will consult with local faculty and check my institution's policies. But beyond that: does this seem like a good idea? What factors should I consider?<issue_comment>username_1: >
> As I'm sure many lecturers/professors would attest, one of the frustrations of teaching can be the continuous asking of questions that had a student read their syllabus and/or navigated the online learning site, would have most likely been answered.
>
>
>
Ha ha! Yes, it really happens all the time.
>
> Regardless, I still receive countless emails and questions not about the content of the course, but [...] where my office is
>
>
>
My students typically wander around on the opposite side of the university from my office, and when they eventually succeed in finding me, I ask them: haven't you read the location of my office in the syllabus on the course website? I'll let you figure out what the answer is...
Students simply don't read page-long bureaucratic information.
>
> In speaking with a number of academics, some of the solutions have been creating assessments based solely on the syllabus
>
>
>
I don't like this idea nor that of bonus quizzes.
To solve this problem, this year I've decided to remove the syllabus altogether (no one reads it anyway) and to send updates by email to all the students (e.g. hey guys, the new homework is online and is due by etc.). Every email should contain just one piece of information and have a length of just a couple of lines. As for the office hours, they are by appointment and I give office directions when they ask for it.
Upvotes: 2 <issue_comment>username_2: I'd avoid setting summative assessments (i.e. for course credit) on topics other than the subject material you expect the students to learn. Doing so might come back to bite you, even if you keep the relative contribution to a final grade as low as you are suggesting.
Perhaps you could spend half of your first lecture rhetorically asking the sort of frequently asked questions that get your goat, while at the same time navigating your course webpage, to show where a student will find the answer to your questions. After reading out five or so of the most commonly asked questions, the students will get the message:- the answer is likely to be on the webpage. Look for it.
Upvotes: 4 [selected_answer]<issue_comment>username_3: Not sure if this applies, but any school that would support tangential "assessments" such as a participation/attendance score would surely fund no fault with this second cousin, which would have the fringe benefit of raising awareness of the very sort of policies that such a participation score is likely to be measuring. I have taught at numerous universities that have tried both ends of this assessment spectrum, from read-for-detail "quizzes" in the first week to contract-mimicking "I read it and understand" signature pages to detach and turn in.
I'm contemplating one that forces a detailed perusal: burying a code phrase, Waldo-style, in a random location deep within the syllabus, along with instructions to send me that code phrase in an e-mail (to establish their knowledge of my e-mail address, at yet another school in which at least two colleagues have nearly identical addresses). Students would receive verbal instructions on opening day to read the syllabus cover to cover "and do what it asks you to do", just to try out such a novel approach.
Upvotes: 0 <issue_comment>username_4: I have indeed given bonus points for questions from the course syllabus and it seemed to resolve the problem that you described (although I never did a formal before-and-after test, I did get a subjective impression that after I implemented this, I was much more rarely bothered by students asking questions on what is clearly on the syllabus).
However, I implemented my system with some slight but very crucial differences from what you described, that seem to have resolved your concerns:
* First, some background: in many of my classes (mainly for undergraduate students), I begin the class session with a reading quiz of around 5 multiple choice questions. These are low-level comprehension questions based on the assigned reading simply to make sure that they read the assigned material. I do not ask any tough questions that require deep understanding--only very surface questions with the aim that if they actually read, they should get 4/5 or 5/5, but if they didn't read, they should not be able to guess more than 1 or 2 correctly.
* To this reading quiz, I add a bonus question taken from the course syllabus (e.g. when are my office hours; or what is the deadline for the 1st milestone of the semester project). With this bonus question, there are 6 questions in the reading quiz.
* I still score the quiz out of 5. So, if they got 4/5 of the regular questions plus the bonus syllabus question correct, their score is 5/5. (If they get 5/5 + the bonus correct, I still give them 5/5, not 6/5.)
The advantages of this system:
* I do not dedicate any extra time, whether as a dedicated assessment or class time, to getting them to read the syllabus. I simply inform them of the bonus questions from day one and then the syllabus assessment is built into whatever assessments I've already designed.
* Because I give my regular quizzes throughout the semester, students are motivated to read the entire syllabus early on in the semester and they are also motivated to refresh themselves on the details all the way up to the last regularly scheduled assessment.
All that said, as you have noted, the idea of assigning course credit points for syllabus content is controversial to some people. However, I have no problem with it. I firmly believe that students who are well aware of the syllabus learn more in the class, because the syllabus is designed to maximize their learning experience. So, I believe that giving bonus points for knowing the syllabus content indirectly but very meaningfully increases their learning, and so is well justified.
Upvotes: 0
|
2015/05/11
| 857
| 2,733
|
<issue_start>username_0: For my bachelor thesis, I have to make a lot of references to the various C++ standard documents, be it the original standard or some of the working drafts.
How do I cite such documents? Many of the working drafts don't even contain a
title page to extract information from.
Working drafts are available at <https://isocpp.org/std/the-standard><issue_comment>username_1: I would cite the current version of C++ standard, based on the **APA Style** (6th edition), in particular, based on the [APA guidelines for citing electronic sources (Web publications)](https://owl.english.purdue.edu/owl/resource/560/10), as follows:
ISO/IEC. (2014). *ISO International Standard ISO/IEC 14882:2014(E) – Programming Language C++.* [Working draft]. Geneva, Switzerland: International Organization for Standardization (ISO). Retrieved from <https://isocpp.org/std/the-standard>
**NOTE:** When the current standard is finalized and published, the citation will have to be updated accordingly by removing the "[Working draft]" phrase and updating the year (i.e., 2015).
Upvotes: 5 [selected_answer]<issue_comment>username_2: One can find nearly every citation for the C++ standards in BibTeX format here:
<http://ftp.math.utah.edu/pub/tex/bib/isostd.html>
For example, here is the reference to the ISO C++98 standard:
```
@Book{ISO:1998:IIP,
  author =       "{ISO}",
  title =        "{ISO\slash IEC 14882:1998}: {Programming} languages
                 --- {C++}",
  publisher =    pub-ISO,
  address =      pub-ISO:adr,
  pages =        "732",
  day =          "1",
  month =        sep,
  year =         "1998",
  ISBN =         "????",
  ISBN-13 =      "????",
  LCCN =         "????",
  bibdate =      "Tue Dec 12 06:45:55 2000",
  bibsource =    "http://www.math.utah.edu/pub/tex/bib/isostd.bib;
                 http://www.math.utah.edu/pub/tex/bib/mathcw.bib",
  note =         "Available in electronic form for online purchase at
                 \path=http://webstore.ansi.org/= and
                 \path=http://www.cssinfo.com/=.",
  price =        "CHF 351, US\$18 (electronic), US\$252 (print);
                 US\$245.00",
  URL =          "http://webstore.ansi.org/ansidocstore/product.asp?sku=ISO%2FIEC+14882%2D1998;
                 http://webstore.ansi.org/ansidocstore/product.asp?sku=ISO%2FIEC+14882%3A1998;
                 http://www.iso.ch/cate/d25845.html;
                 https://webstore.ansi.org/",
  acknowledgement = ack-nhfb,
  xxISBN =       "none",
}
```
You can copy and paste this into your `.bib` file containing citations for LaTeX, which you should be using instead of Word anyway : ) You can use the `natbib` package to format it into whichever citation style you like.
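Note that the entry is not fully self-contained: `pub-ISO`, `pub-ISO:adr`, and `ack-nhfb` are `@String` abbreviations defined elsewhere in the Utah bibliography archive, so BibTeX will complain about undefined macros if you paste only the entry into your own `.bib` file. Below is a minimal sketch of what you could add; the exact expansions are my assumption, so check `isostd.bib` itself for the authoritative definitions. (Also, the `\path=...=` commands in the `note` field appear to assume the `path` package; if you don't want to deal with that, it is safe to delete the `note`, `acknowledgement` and `xxISBN` fields, which don't affect the citation itself.)

```
% Assumed expansions of the @String macros used by the entry above; the
% authoritative definitions live in the Utah isostd.bib archive itself.
@String{pub-ISO     = "International Organization for Standardization"}
@String{pub-ISO:adr = "Geneva, Switzerland"}
% ack-nhfb is another archive macro; either copy its definition from the
% archive or simply delete the acknowledgement field from the entry.
```

And here is a minimal usage sketch with `natbib`, assuming the entry and the strings above live in a file called `refs.bib`:

```
\documentclass{article}
\usepackage[numbers]{natbib} % numeric citations; use the authoryear option if preferred
\begin{document}
The language is defined by the ISO C++ standard~\citep{ISO:1998:IIP}.
\bibliographystyle{plainnat}
\bibliography{refs}
\end{document}
```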
Upvotes: 2
|
2015/05/11
| 605
| 2,652
|
<issue_start>username_0: I have done my bachelors in Electronics & Communications Engineering from India and I am currently working as a software developer with 1.5 years of experience.
I would like to do my master's in Europe, but in either the Computer or the Software Engineering field. Is it possible to change from a field like electronics to software, specifically in Europe (preferably Germany or the Netherlands)?
I know it is quite common in the US, but I am not able to figure this out for European universities.<issue_comment>username_1: It is definitely possible in Germany, but it depends on the university.
But why change?
I studied electrical engineering for my B.Sc. and M.Sc. in Munich.
My master's studies had a strong focus on the software part of electrical engineering (security / machine learning).
So it's definitely possible to do a master's focused on software with an electronics background in Germany.
Upvotes: 0 <issue_comment>username_2: I only know about Germany, and as @deloz already mentioned it is definitely possible.
Most Universities will look at what you have already studied during your bachelor's degree and sometimes require you to take extra classes during or before your master's degree in order to catch up to their standards.
Upvotes: 1 <issue_comment>username_3: I will put a bit of a damper on username_2's and deloz's enthusiasm.
First, there are two kinds of master's programs in Germany: consecutive and non-consecutive. Non-consecutive programs are open to anyone who meets the general admissions criteria, while consecutive master's programs require that a student have a bachelor's degree in the same discipline, with enough overlap between the curriculum of the student's bachelor's program and the corresponding bachelor's program at the school offering the master's. If the difference is too large (more than about one semester's worth), you can be refused admission.
Secondly, if you are coming from outside the EU, the university must recognize your undergraduate degree as being at a school comparable to the one to which you are applying.
Finally, in the event you are admitted, you may find that you need to get approval from the study advisor of the major you wish to pursue, and possibly have to obtain credit for *each* individual course you want to transfer from *each* individual instructor responsible for the respective courses at the new university. Moreover, if you are missing too many credits, your application for transfer will likely be declined, as there are generally minimum limits for how many credits can be transferred from the bachelor's degree.
Upvotes: 2
|
2015/05/11
| 3,467
| 14,939
|
<issue_start>username_0: I am editing a small scholarly journal published by a learned society. One of our authors, after his manuscript was finally accepted for publication and he had signed and returned the publishing agreement (with copyright assignment), is trying to negotiate after receiving the first proof version.
He insists on adding new content and a new reference to the main text of the article, which I, as editor, will not allow. He is offering to resubmit the manuscript with the new content included, and to wait for the results of a completely new peer review. He is very persistent about including the new content which, I think, has no impact at all on the overall quality or on the conclusions of the paper. The reference is a fresh (2015) self-reference, so I assume that he simply wants to increase the visibility of that other paper.
One of my main problems with the new reference (and the related content) is that it points to one of the author's own papers, published in a predatory open access journal. The publisher is listed on [Beall's list](http://scholarlyoa.com/publishers/). When I pointed this out, he said that his paper was subjected to valid and rigorous peer review and that other people's bad experience has nothing to do with the academic validity of that specific paper. I disagree, and as I am responsible for our published content, I feel adding the new reference is a risk to our journal's reputation.
It would be a waste of our journal’s resources to disregard that we have the copyright of a manuscript, which we accepted for publication after rigorous peer review. We do not feel that we should comply with what the author wants, and we feel that we should publish the paper even if the author will not approve the final proof.
What do you think about this situation? I have a few other options, namely to reject the paper altogether and to go on with the new round of reviews. Both choices would make me feel being strongarmed. Any advice would be appreciated.<issue_comment>username_1: While you might be in the legal right, I think trying to publish when the author wants to withdraw is likely to be more trouble than it is worth for a small society journal. You probably don't have the resources to get into a protracted battle if the author decides to be *really* problematic (e.g., a lawsuit).
As I see it, there are three reasonable approaches at this point, in increasing levels of assertiveness on your part:
1. If the new content isn't very big (e.g., a paragraph or two and the reference), just let it be; just make the change at the proof stage and let it stop being your problem. If it really does make little difference to the paper, as you say, then this is a reasonable triage to do to get this author out of your hair.
2. Reject the paper and send it through a new round of peer review. It's not too bad a waste of resources, particularly if you use the same reviewers, who will be able to just look at the new version in terms of its differences from the old.
3. Offer the author the choice between a final rejection (no resubmission will be reconsidered) and publishing as-is.
Which you choose probably should be guided by just how willing you are to invest your time and reputation into a battle with this author. Some people are known and repeat-offending bullies, and it's definitely worth standing up to them. Other times, you've got a person who is normally reasonable and who has just gotten a particular fixation on this particular issue, and it's worth accommodating them on this one occasion. If you do decide to accommodate the author, however, you need to make it very clear that this is an unusual exception and you will be holding them to much stricter guidelines in the future.
Upvotes: 7 [selected_answer]<issue_comment>username_2: >
> It would be a waste of our journal's resources to disregard that we have the copyright of a manuscript, which we accepted for publication after rigorous peer review. We do not feel that we should comply with what the author wants, but we feel that we should publish the paper even if the author will not approve the final proof.
>
>
>
I would absolutely recommend against this. It may be legally acceptable, but I would consider it an immoral abuse of copyright. There's a lot of debate about whether publishers should hold the copyright to papers and what the costs and benefits are, but I've certainly never heard the view that publishers should hold copyright in order to facilitate publishing the paper against the author's wishes if the author becomes uncooperative partway through the publication process. I'm sure the author did not think of the copyright transfer as meaning the journal could simply publish whatever version it chooses in case of a dispute, and taking advantage of this technicality would hurt the journal's reputation far more than the author's.
>
> The reference is a fresh (2015) self-reference, so I assume that hu simply wants to increase the visibility of that other paper.
>
>
>
If the other paper is unrelated, then this is worth arguing about (not because it's a last-minute change, but because inserting unnecessary self-references is itself problematic). However, I see no good reason to object to adding a last-minute reference if it's actually on topic. Surely the cost of making such a change is minimal, and it might actually be useful for readers.
The additional content is a more subtle issue.
>
> He is very persistent about including the new content which, I think, has no impact at all on the overall quality or on the conclusions of the paper.
>
>
>
I can sympathize a little with the author, since this content might be worth recording in the literature but simply not publishable on its own (too short or too closely tied with this paper). That's a reasonable argument in favor of including it in this paper, even if it doesn't improve the paper's quality or change the conclusions. However, making nontrivial changes to page proofs is certainly disruptive.
I don't see reviewing as a real obstacle if the changes are modest. If the author just wants to add some brief and uncontroversial comments, I imagine you could get one of the original reviewers to look them over very quickly and approve them as unproblematic. Of course larger or potentially controversial comments would require serious reviewing.
If the other last-minute changes would increase the typesetting and copyediting costs, then you could ask the author to cover the increased costs. (That may upset the author, but it's a traditional approach to handling this situation.) You might also keep costs down by adding the material as a "note added in proof" at the end of the paper.
In any case, I'd explain to the author what you see as the problem here. Disruption to the journal's internal processes, increased costs, the potential for an end run around the reviewing process, etc. You might be able to convince the author by pointing out issues hu wasn't thinking of, such as "You might think adding two paragraphs on page 1 is no big deal, but the copyeditor has already spent time optimizing the layout and figure/table placement throughout this long paper. I don't feel good about saying that needs to be redone just because you didn't submit a truly final manuscript when asked for one. I can't run the journal effectively if the staff feel I'm wasting their time or not standing up for them."
>
> Both choices would make me feel being strongarmed.
>
>
>
This sets things up as a power struggle between you and the author over enforcing the rules as you interpret them. That sounds like an unnecessarily stressful and confrontational approach, and also one that is not likely to be persuasive to the author. Your discussions so far have established that the author feels these changes merit an exception to the rules, while you are unconvinced. Going forwards, I'd focus on what the underlying difficulties are, rather than whose will is going to prevail.
Upvotes: 4 <issue_comment>username_3: I just wanted to add a third opinion: it is really not a good idea for you to publish an article against the author's wishes, even though you hold the copyright. If you did this, then at best you would never have any dealings with this author again. (In reality: that outcome must be acceptable and perhaps even desirable for you to contemplate this course of action.) More likely, the author will complain about it to a smaller or larger number of people, creating a bad name for your fledgling journal. For what it's worth, if I heard a story about this from an open access journal published by a regional scholarly society: I would not do business with you.
If I am honest, I didn't find your reasons for not publishing the change to be very convincing. It sounds like you are viewing this as a power struggle with the author that you want to win. You write:
>
> He insists on adding new content and a new reference to the main text of the article, which I, as editor, will not allow.
>
>
>
Why not? You don't really say why you care so much.
>
> He is offering to resubmit the manuscript with the new content included, and to wait for the results of a completely new peer review. He is very persistent about including the new content which, I think, has no impact at all on the overall quality or on the conclusions of the paper.
>
>
>
If in your professional opinion the added content has no impact on the quality of the paper, then in particular it has no negative impact. So what's the problem?
The offer to get a completely new peer review seems like a generous move on the author's part. He's saying that he's not trying to sneak anything by you but rather wants to err on the side of going through the process thoroughly, even to the extent of having the publication delayed (surely) or jeopardizing the acceptance of the paper (possibly). But because you feel the change is so minor, you can just bypass this entirely. It seems to be a matter of retypesetting the paper.
>
> The reference is a fresh (2015) self-reference, so I assume that he simply wants to increase the visibility of that other paper.
>
>
>
This seems like a rather bad faith assumption. If the author feels that his other
work is relevant, it contributes to the literature to have the citation appear. In general, it can be tricky to get cross references in one's own recent papers right, and last minute additions can be very helpful here: if it's a 2015 publication then it is plausible that the author didn't have the full bibliographic data until now.
>
> It would be a waste of our journal’s resources to disregard that we have the copyright of a manuscript, which we accepted for publication after rigorous peer review.
>
>
>
That's written as if you're conveying information, but to me it sounds like a dogmatic assertion about what you've already decided to do. **What** resources would be wasted by retypesetting the paper? Let me quote from another answer which approaches this issue more reasonably:
>
> If the other last-minute changes would increase the typesetting and copyediting costs, then you could ask the author to cover the increased costs. (That may upset the author, but it's a traditional approach to handling this situation.) You might also keep costs down by adding the material as a "note added in proof" at the end of the paper.
>
>
>
In other words: if it is really a waste of your resources, ask the author to financially compensate this waste. Or tell him that you need to accommodate him in the most inexpensive possible way unless he is willing to financially contribute. Either of these is so much more reasonable than publishing a paper that the author doesn't want you to publish. You speak of being "strongarmed" but many authors would regard the behavior you're contemplating as an extreme example of that. Can you not work with the author to come up with an outcome in which neither is strongarming the other?
**Added**: I just read the bit about the added publication coming from a predatory Open Access journal. I think that probably should have been part of the question, since as my answer above indicated, the explanations by the OP didn't really explain the motivation. This does. However, I don't think it really changes my answer, except to say: either you have concern about the added content or you don't. If you do, you *should* get it re-refereed. If you don't, then the fact that the citation is to a not-so-great journal is really not your concern: citations should not be censored (which is not exactly what's happening here, but just to enunciate the principle) by the editor because they are unseemly to the journal.
Upvotes: 4 <issue_comment>username_4: It takes two to tango -and tango it should be, and not [tug-of-war](http://en.wikipedia.org/wiki/Tug_of_war).
You, representing a collective entity (a journal), should take the high moral ground here, not the legal one. I would suggest sending the updated paper, together with a full copy of the other paper that the author wants to add as a citation, to the same reviewers, asking for their help in the name of science (and anyway, since they know the paper, it should take them little time to judge the worthiness of the added material and of the citation). It is also fair to inform them about your concerns.
If they give the green light, the author's persistence would have been vindicated by the same people you have trusted in the first place. If they don't, you can tell the author that the new version did not pass the test. If then he wants to completely withdraw, so be it. You will have done what a scientific journal should do: give science a chance -and sometimes, two.
Upvotes: 2 <issue_comment>username_5: I am wondering not about the academic view (where clearly you shouldn't publish if the author doesn't want you to publish), but about the purely commercial view.
Up to some point in time a publisher wouldn't care much if an author withdraws. A bit later it becomes inconvenient because it's too late to find a different paper, and a 100 page journal might become an 80 page journal. Next stage is where the journal has been sold and the buyers won't be happy if they don't get the pages they paid for, or where buyers have been promised a great article about a subject and that great article isn't going to be there. Then a stage where the printer charges for the empty pages, and the final stage would be where the journal is printed, and withdrawing would mean to destroy the print run.
At the point where withdrawal causes the journal financial damage, the withdrawer has to take responsibility for this and face the damages. And if he wants to increase the visibility of his other paper by referencing it, he has to accept the risk that after withdrawing his first paper, it may not be accepted for the next issue of the journal.
Upvotes: 0
|
2015/05/11
| 723
| 3,124
|
<issue_start>username_0: My recommender and I know each other through scientific discussions and collaborations. I have never been his student.
He told me that in the online recommendation form they asked him some questions like "Which courses has he/she taken with you?" or "How do you rank the applicant among all your students?" and he does not know what to say.
How should he answer these questions? Should he leave them blank and just write the recommendation letter? Would this not be perceived negatively?<issue_comment>username_1: I would recommend that he fill these fields to the best of his ability, following rather the spirit of the question than the literal letter. The first question he would need to leave blank. For the second question, he might be able to compare you to actual students he did have - if this is not meaningfully possible (e.g., if he is an industrial researcher, who never had any students), then he should leave that one blank, too.
He should then point these questions out in his recommendation letter and explain why he could not enter anything there.
---
The problem of course is that you don't know whether these online forms are first processed automatically, and whether blank fields could count against you without a human even parsing the explanations in the recommendation letter. I would assume that such issues arise frequently and that the people who create such forms are smart enough to understand this. In which case I would assume that blank fields are treated separately and do not lead to an automatic exclusion of your application.
(Of course, it could also be that a "large" percentage of blank fields does indeed lead to an automatic rejection, since one could argue that the recommender might not even know you well enough in an academic context.)
---
EDIT, in the light of [username_2' answer](https://academia.stackexchange.com/a/45239/4140): I realize that I answered based on the assumption that these questions are multiple choice questions, not free text questions. If these are free text questions, then yes, he should answer truthfully that you never were a student of his (and still go into details in his recommendation letter).
Upvotes: 3 <issue_comment>username_2: >
> He told me that in the online recommendation form they asked him some questions like "Which courses has he/she taken with you?" or "How do you rank the applicant among all your students?" and he does not know what to say.
>
>
>
Well, he should say "kasramsh has never been my student". What else could he truthfully say?
>
> Should he leave them blank and just write the recommendation letter?
>
>
>
He shouldn't leave them blank but answer them truthfully: "I know kasramsh from [...]. He has never been my student." Somewhere in the letter, the relationship between recommender and student should be explained anyway.
>
> Would this not be perceived negatively?
>
>
>
Maybe, maybe not. Anyway, it is the way it is, and there is no way how you can spin it that the target institution would *not* figure out that you are not your recommender's student.
Upvotes: 3
|
2015/05/11
| 640
| 2,638
|
<issue_start>username_0: Are there any technical as well as practical differences between the two academic positions, one being a permanent faculty position in the UK (or Australia/NZ and other similar systems) and the other being a tenured position at US/Canadian universities?
I believe most things may come down to the instances when they can be fired. While this question ([Would tenured professors who are charged with a crime generally be fired?](https://academia.stackexchange.com/questions/28569/would-tenured-professors-who-are-charged-with-a-crime-generally-be-fired/28570#28570)) addresses this aspect for the US/Canadian system, I don't know a comparison among different systems.
Edit: My question is different than [US statute of Higher Education System](https://academia.stackexchange.com/questions/31873/us-statute-of-higher-education-system) because that question refers to the authorities and rules in different countries that grant professorships. My question is regarding the actual (technical and practical) differences between 'permanent' and 'tenured' faculty members in different systems.<issue_comment>username_1: I think the most important difference relates to "redundancy" in terminations. In the UK, administration can decide to stop teaching a subject at a university, and faculty may become "redundant". There are various protections in the UK such as the need to try to find alternative employment for the staff member etc. but in the final analysis, if they want to stop teaching Vedic Epistemology, the Vedic Epistemologist could end up sacked. With a tenured position in the US, that would not be sufficient grounds for termination (following university rules that I am aware of -- rules are set by each institution, though there is considerable similarity). A correlated difference is that there is a huge tenure ordeal in the US (the "up or out" rule -- either you get tenure, or you get fired), which does not exist in the UK.
Upvotes: 3 <issue_comment>username_2: Apart from the differences due to the educational systems (expectations for teaching, types of classes, recruiting grad students) and funding opportunities/expectations, one difference is that the UK system has finer gradations of rank beyond the usual titles (Lecturer, Reader, Professor, etc.), whereas most US universities do not. These determine your pay grade and salary increases, which are fixed and not subject to negotiation, unlike in most US universities. In terms of salary in US institutions, merit-based raises take the place of these promotions to higher pay grades, but these are much more fickle in general.
Upvotes: 1
|
2015/05/11
| 821
| 3,513
|
<issue_start>username_0: I am finishing my Master's in a foreign country on a student visa. The visa will expire in one and a half months, exactly when I am due to graduate, but my thesis supervisor wants me to stay one additional month to "finish" the thesis. Indeed, he says he won't accept it if I don't stay to include additional verification, rendering me unable to fulfil my graduation requirements.
I have asked for support at the International Office of my university, but it seems nothing can be done at this point to extend the visa. I should have begun the process at the beginning of the semester if I needed an extension. Therefore, I am facing imminent deportation, not to mention that: 1. Without a valid student visa the university can't award me my degree. 2. If I remain in the country, I will become homeless, because my student room contract will end at the same time as my visa and, as a foreign student, I don't have enough money to rent a room on my own.
I have had a tense relationship with him since one month into my thesis. He wasn't particularly interested in my thesis (poor communication, not interested in the student's benefit but in his own project, disappeared for months and then reappeared totally disconnected, etc.), but I stayed because of his prestige and my fear of losing a complete year of preparation in the field. This is the worst moment in our relationship. By his criteria, I have been lazy and put myself in this situation. Either way, I have to solve it now or lose two years of hard work and money spent.
How can I solve such a situation? How can I make a professor aware that there is a bureaucratic world beyond academia, and that I don't have the same privileges as a national of his country? Maybe a foreign student has had a similar problem and can give me advice or tell his/her experience...
**Edit:** I have to mention that my supervisor has a legal/financial obligation on the project I am part of; hence his worry.
|
2015/05/12
| 369
| 1,454
|
<issue_start>username_0: I have applied to 2 postdoc jobs in the same university and have been selected for a 2nd interview in one before doing the first interview in the other lab.
The interviews are set at different times, so I will not be able to attend both, and I wish to cancel the one I haven't been to yet. Should I tell the professor the truth ("I will be interviewing with another lab"), or shall I just say that for some reason I have to cancel it, without mentioning the other interview?<issue_comment>username_1: Tell the truth, as you haven't done anything unethical. However, just attending the interview is still a good solution.
Upvotes: 2 <issue_comment>username_2: Title
Name
Address
Dear University B -
Thank you very much for notifying me that I am on the short list for position W at University B and for scheduling me for a first-round interview on February X, 2015.
I have recently been notified by University A that I am a finalist for a position at Lab A and they want to schedule a second-round interview, also on February X, 2015.
As the Lab A request is for a second-round interview, I feel that I must prioritize this against first-rounds, even though I am still very much interested in pursuing the position at your lab.
Would it be possible for us to reschedule my interview for Feb (X-7)? I understand that this may raise logistical concerns and appreciate your consideration.
Sincerely,
User34154
Upvotes: 4
|
2015/05/12
| 1,019
| 4,086
|
<issue_start>username_0: The questions says it all. I have an report I want to write. The level expected is slightly lower than that of a master thesis. I do need to write an introduction to the topic and there is a very recent literature review online. It covers some things that I want to use in my introduction. I do not have the possibility to access all the papers in the review as they are behind a paywall. How should I cite the review or the papers in the review? What is common practice in psychology?<issue_comment>username_1: If you copy the review text from the web into your own thesis it is a clear case of plagiarism.
When you write academic text where there is a need to provide the sources for your information (as references), you need to have read the sources. It is not OK to quote sources without having even looked at them. Only in very rare cases may it be acceptable to quote a source quoted by someone else in a publication. These cases may include very hard-to-find literature or literature in a different language. But such secondary references should not be used unless deemed absolutely unavoidable.
If you need to cite the online review, you will need to look at how web sources are cited at your school (if they are allowed). You can also look at the recommendations of the American Psychological Association (APA; very appropriate for you) on their [style site](http://www.apastyle.org/). You may find many other bits and pieces there that are of interest to you.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I'm going to disagree with Peter on this.
Almost anything is acceptable if you are utterly, utterly open and honest about it. If you lift sections from the review without proper attribution, that's plagiarism; but if you properly cite and properly quote a block of text from the review, with quotation marks and indentation making clear where it is from, then it's not plagiarism.
There's nothing wrong with including a quoted block of text with something along the lines of "and this concept is well explained in A. Bsons review [title] (20xx) " followed by a quote (even quite a large quote) which is properly indented and shown to be a quote so that people are clear what are your own words and what has been written by the other author.
Now, for one reason or another, the review might be incorrect, a poor choice of source, or a source that is unacceptable for some other reason within the bounds of what you've been assigned to do; or someone may have specifically stated that reviews must not be used as sources. But that's all separate from plagiarism.
To answer in the spirit of my answer:
User woj <https://academia.stackexchange.com/users/15446/woj> on this site once commented that
>
> "I reviewed once a thesis which had a copied/pasted page from a
> publication (with an authorization from the author and clear
> citations) and the author added in a footnote that this is by far the
> best explanation he read and does not see any reason to phrase it
> differently (that was in an introductory chapter). I was very fine
> with that."
>
>
>
As a comment in this page:
[Are you allowed to copy text from your Master's thesis into your PhD thesis?](https://academia.stackexchange.com/questions/41551/are-you-allowed-to-copy-text-from-your-masters-thesis-into-your-phd-thesis/41555#comment92104_41555)
Upvotes: 4 <issue_comment>username_3: Here's my two cents. The OP's question is formulated as "is it plagiarism/OK". Therefore, if someone copies large parts of someone's work and provides *proper attribution*, it is **definitely not plagiarism**, but it is IMHO **strongly not OK**. It is not OK for the following two reasons:
* most academic guidelines (i.e., dissertation guides) *significantly* **limit** the *size* of direct copy/paste materials, so it is not possible to do that without violating those guidelines;
* even in the absence of such guidelines, direct reusing of someone's work represents the (almost complete) **lack** of *academic effort*, thus, making that paper/report **irrelevant**.
Upvotes: 3
|
2015/05/12
| 859
| 3,889
|
<issue_start>username_0: I'm wondering about the best way to handle the following situation. Some time ago I've asked my then-potential referees about possibility to become my reference sources and they kindly agreed, after which I've updated my CV with their contact information.
After some analysis of academic job market advertisements, I had the *impression* that most institutions (at least, solid ones) do not expect to receive letters of recommendation **directly from a job applicant**, but rather **from referees** (via either e-mail or online forms). However, I recently ran across several institutions (including some well-known ones) which, in their instructions, ask job applicants not only for the traditional set of documents (cover letter, CV, samples of research, teaching and research philosophy, transcript, teaching evaluations), but also for *recommendation letters* instead of references.
The problem is that now I have to explain this situation to my referees and kindly ask them to send me those letters of recommendation (for institutions, expecting them from applicant) **plus** to submit those letters directly to institutions, which expect that information from referees. Is it a reasonable request? My referees are very nice people, but quite busy, and I feel uncomfortable to bother them more than needed. However, I don't really see any workarounds in this regard. Is there anything else I can do to optimally solve this problem? I would appreciate your opinions.<issue_comment>username_1: Everything will be fine. If they have ever written more than a handful of these letters for others, they will recognize that different institutions have different requirements. Just tell them what you need, and thank them very much for their time and patience.
Upvotes: 3 <issue_comment>username_2: If you are a student or postdoc, your current institution may be willing to take care of this for you. I remember when I applied, the secretaries sent out all the (paper) applications for me. They had my letter writers send a copy of their letter directly to the office, and they made copies (I think with a note) for each application. I never handled these letters myself at all.
If your department does not do this, the usual thing is for the writers to give you sealed envelopes with the writer's signature/initials over the seal to show you didn't open them.
Upvotes: 3 <issue_comment>username_3: When I was applying for postdoc positions, I sent in about 50 applications and researched quite a few more that I wound up not applying for. Out of these many applications, only one or two asked for references' contact info; the rest wanted recommendation letters. In every case the letters were to be submitted directly by their writers, not by me.
The point is that, at least in my corner of academia (but I think in many others as well), when someone serves as a reference for an applicant, it is a standard expectation that they will write a letter on behalf of the applicant and submit it in some form that does not allow the applicant to see it. I would be very surprised if any of your referees/references agreed to fill that role without expecting to write a letter on your behalf and submit it to whatever institutions you're applying to. Of course they won't write a brand new letter for every application; typically they write a generic letter that can be used for many jobs, possibly customizing it a little if they happen to have connections at a particular institution. Once the letter is written, it's very little additional effort for them to submit it to each new application.
Note that, as I hinted at above, it's standard for reference letters not to be visible to the applicant. I think it would be quite strange for a job listing to require that a reference letter, written by someone else, be submitted by you.
Upvotes: 2 [selected_answer]
|
2015/05/12
| 1,302
| 4,886
|
<issue_start>username_0: In Computer Science, you quickly find yourself overwhelmed by the huge amount of literature available on any given subject.
I searched online for advice on how to proceed and write a literature review **(in a way that I could publish it)**.
A lot of the information online goes over the same generic points, so I am seeking help from this community.
I would appreciate some advice or strategies based on your previous experience.
**Reference:**
Kotz, Daniel, and Jochen WL Cals. "Effective writing and publishing scientific papers - part I: how to get started." Journal of Clinical Epidemiology 66 (2013): 397.<issue_comment>username_1: Let me share some insights; I hope they will be useful. I will break down my answer based on your question's main dimensions, that is, *help*, *knowledge* and *motivation*. Regarding the first dimension, it is unclear to me what you mean, so I will leave this aspect for you to clarify and for others to address.
In regard to **knowledge**, the best advice I can give is to get a decent book specifically on writing literature reviews (e.g., Hart, 2005) or even a good book on research methodology that has a comprehensive enough chapter on the topic (e.g., Booth, Colomb & Williams, 2004; Creswell, 2007, 2014; Davis & Parker, 1997). This is just to start. More importantly, IMHO, after you have read some theory on writing literature reviews or research manuscripts, you should start **reading real literature reviews**: either *review/survey papers* (for Computer Science, there are specialized journals that publish such papers, for example, [ACM Computing Reviews](http://en.wikipedia.org/wiki/ACM_Computing_Reviews) and [ACM Computing Surveys](http://en.wikipedia.org/wiki/ACM_Computing_Surveys)), or simply *focused research papers* on the topic of your interest (most of them will have a corresponding section, usually titled "Review of Literature", "Introduction", "Background", "State of the Art" or similar).
Speaking about **motivation** for writing a literature review, that IMHO should come from your *excitement* about (or *interest* in) a particular *topic*. If you don't have excitement, or at least enough interest, in a topic, I don't see how you can find the motivation. It's that simple. Your other questions are rather broad, but I'm sure you will be able to answer most of them after reading some foundational literature on research methodology, as recommended above.
**References**
<NAME>., <NAME>., & <NAME>. (2004). *The craft of research (2nd ed.).* Chicago, IL: The University of Chicago Press.
<NAME>. (2007). *Qualitative inquiry & research design: Choosing among five approaches (2nd ed.).* Thousand Oaks, CA: Sage.
<NAME>. (2014). *Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.).* Thousand Oaks, CA: Sage.
<NAME>., & <NAME>. (1997). *Writing the doctoral dissertation: A systematic approach (2nd ed.).* Hauppauge, NY: Barrons Educational Series.
<NAME>. (2005). *Doing a literature review: Releasing the social science research imagination.* Thousand Oaks, CA: Sage Publications.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Here are some generic points:
* Your topic has to be narrow enough, but it does not need to be as narrow as a specific research question. For example, the topic should not be as broad as "German politics" (too much literature to cover), it can be as broad as "policy change in German federalism", and it should not be as narrow as "the impact of the reunification on policy change in German federalism" (because that would be a specific research question).
* At first, read as much on your well-delineated topic as you can. Once you recognize certain sub-topics, issues, questions, patterns, debates, etc., begin to organize your reading and further research around these emergent sub-topics. Decide which of them you would like to cover in more detail. This adds structure to both your work and your review.
* Start to write from day one. At first, you will only be able to write short notes. Later, you will be able to arrange those notes around the sub-topics that you begin to discover. This is the outline of your first draft.
* Your review needs a structure. It should answer one or a small number of questions. Resist the temptation to try to summarize everything that has been written on a topic. Instead, the purpose of your review is to chart the territory and identify the research frontier. Someone who has read your review should be able to identify the open questions and possible avenues to answer them in the future.
[More](http://secondlanguage.blogspot.de/2009/08/reading-and-writing.html) along those lines.
There is also a [related question](https://academia.stackexchange.com/questions/3420/how-can-i-do-a-literature-review-efficiently?rq=1).
Upvotes: 2
|
2015/05/12
| 593
| 2,434
|
<issue_start>username_0: I got a paper rejected even though the associate editor and the referees liked the paper - it was more a question of fit. Now I am sending the paper to a similarly ranked journal that might be a better fit for it. The anonymous referee gave lots of good constructive feedback. Should I thank him in the new version? I am reluctant to do so for two reasons. First, the referees or editor at the new journal might correctly infer that the paper was rejected elsewhere, and that might bias them against the paper. Secondly, even though the referee was anonymous, given his tastes and comments, I am almost sure he is someone I already thanked in the paper, but of course I cannot be 100% certain of it.<issue_comment>username_1: Your first concern is pretty much on point. I really see no benefit in thanking the reviewers in a subsequent submission of the paper. It just takes you close to a grey area where theoretically no harm would be done, but in practice there is often some degree of bias involved. Again, no benefit, only potential harm.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I don't see a gray area here at all. If someone has helped improve your paper, you **must** acknowledge their contributions, either by co-authorship, citation, or acknowledgment. Yes, even in the submission. Yes, even if there is some small chance that the new referees will notice that your paper was rejected elsewhere *and* will let that fact unprofessionally bias their review.
Even if your submission is double-blind, you can always thank ███ █████ and ██████ ██ for their helpful comments on an earlier version of the paper.
Less forcefully: It is *never* a mistake to sincerely express gratitude.
>
> even though the referee was anonymous, given his tastes and comments, I am almost sure he is someone I already thanked in the paper, but of course I cannot be 100% certain of it
>
>
>
You are not supposed to know who the referee is. So don't guess; just thank the anonymous referee. There is absolutely no harm in acknowledging the same person twice, once by name and once in their role as anonymous referee.
Upvotes: 3 <issue_comment>username_3: Do *you* feel grateful for the reviews? Then acknowledge them, perhaps together with the reviewers of the current round. If the reviewers are supposed to be anonymous (even if you can guess or know who they are), simply preserve their anonymity.
Upvotes: 1
|