2015/12/19
<issue_start>username_0: How does doing a second master's in mathematics, after a first master's in physics, help in getting a PhD in theoretical physics in Europe? Will it help compensate for a low GPA in the physics master's? Does the reputation of the university (for the master's degree) really matter for PhD admissions if one has enough project experience in the field from top institutions?

<issue_comment>username_1: Usually, the crucial point when applying for a PhD in Europe is to find an appropriate and willing advisor to take you on as a student. A second master's will not give you any significant advantage over other applicants. However, knowing the research interests of the professor, lab, department, or program where you want to apply, and having strong motivation for that field, can be advantageous. Unfortunately, to be more specific I would need the name of your program and university (which SE rules do not allow you to share), but I am guessing that you are applying for PhD positions that are free and financially covered; in most of these applications, good communication with the advisor and program coordinator is crucial. Upvotes: -1

<issue_comment>username_2: ***Nothing compensates for*** your low GPA in physics. I think you have to learn to live with that. **But** there are several ways to improve your competitiveness for a PhD application in theoretical physics. First let me share my experience with the application procedure in Europe. In France, from my experience, there is a well-structured process for PhD intake. Your application will undergo an initial screening using various criteria, one of them being the **GPA**, another the **master's thesis**. The next step is the interview (expect several rounds). I had a completely different experience in Germany. I was offered a project associate position for 6 months as an entrance to the PhD, predominantly on the basis of my current work (master's thesis) and the recommendation from my adviser. I know several others who had a similar experience. I recommend the following.

1. Find a short-term internship in the field of your liking. If you do *really* well, such that you may get a publication, or at least the supervisor can write a good recommendation for your application, then you have good chances of getting admitted.
2. Have you considered the US? A great deal of research is going on there. Take the **GRE** if your GPA meets the minimum requirements. I cannot say for sure that a good GRE score alone can get you a position; I guess someone from the US could say more about that.
3. If the above options seem impractical, try applying for an MS in theoretical physics. There are some specialized MS offerings in astrophysics, photonics, etc.
4. Do independent research in a field of your choosing. There is always a possibility of collaboration with others in the field if you can convince them you have the potential to be a researcher.

**Bottom line** IMO, doing a master's in mathematics for the sole purpose of becoming a potential candidate for a PhD in theoretical physics, because you think you can't be one due to your low GPA, is not a good idea. There are other options to consider. If you are really motivated to do math, nothing stops you. Upvotes: 1

<issue_comment>username_3: I think I now understand what the author of the original post means to do, so please comment if I understood you correctly. The plan is to enter a master's in math, try to get a better GPA, and then apply for the PhD using only the new master's diploma. If that is your intention, then yes, that could work.
As far as I know, a master's program in the USA lasts for one year. If you have the time, will, and funds to do it, sure, go ahead. You can get a higher GPA and an MSc in mathematics while learning something new in the process. Another piece of advice is to get your current GPA converted to the European grade standard. Go to the websites of the universities where you intend to apply for a PhD program and see if you meet the minimum grade needed. Since there isn't a universal grade minimum, you may find out that you qualify at one of them with your current GPA. Remember, GPA is not a be-all-and-end-all criterion. From personal experience: the minimum grade average was 3.5 (very good). When I applied they checked my grade average to see if I met their minimum, and they never looked at it again. It was mostly an administrative criterion. Interviews had much more weight in their decision to admit me to the PhD program. Hope it helps. Upvotes: 2 [selected_answer]
2015/12/19
<issue_start>username_0: I am from a relatively unknown university in a European country. I am one of the best-performing students in my department and have won academic prizes and published several papers, which pretty much no one else here does before the PhD level. I will be applying to good universities in other countries. However, I am worried that no one will admit me. If that were to happen, how should I proceed to get a PhD at a widely recognized university? I have thought of enrolling in some summer courses organized by the universities I am applying to, to get in contact with people there. What else could I try? Participating in conferences might be an option as well, but it is difficult to get to talk to the people I would like to work with. To be more precise, the awards are really large (4x my monthly salary), and the publications are in mid-tier venues (impact factors in the 2-3 range).

<issue_comment>username_1: I should think that your CV, with the information you give, will attract attention and should earn you a couple of interviews. Having papers at your stage is a bonus. I think you should just go and apply for positions. Or else, if you have someone in mind you would like to work with, you could send a (personalised!) email stating your background in brief (including achievements, with a full CV attached) and explaining that, and why, you want to work with them. In short: I think your resume, as described, is clearly better than par for the course, and I recommend you go for it without too much worry. Upvotes: 2

<issue_comment>username_2: I think that everyone who is applying for PhD programs should consider their backup options. Admission is not a guaranteed thing (unless you have a truly exceptional record), and even if you get in, many people decide that a PhD is not right for them partway through. Identifying a backup plan is an important insurance policy to have. This is especially important if you are applying during your last year of undergrad. If your applications don't go as well as you hoped, finding a job or other opportunity starting in April can be an impossible task. If you are sure that you would still want to get a PhD even after a disappointing application round, make sure to research 1- to 2-year opportunities that can strengthen your resume. Research jobs, RA positions, master's programs, some industry positions, etc. are all things to look at, though availability may depend on your field and situation. If possible, you should also ask your adviser if they know of any opportunities or can put you in contact with anyone who might. Depending on the culture/positions, I would even consider applying to some alongside graduate school applications if you are unsure how they will go. I worked at a PhD stepping-stone-type job for a couple of years before applying, and we regularly interviewed applicants who were also applying to PhD programs. Many places won't be so understanding, so be careful with this, but we were fine with that situation. Upvotes: 3

<issue_comment>username_3: I know of many international students with the same issue. They're very bright and have a great academic resume, but most of the professors at top US universities haven't heard of their schools. Because of this, it's hard for an admissions committee to accurately judge how good they really are. What often happens is that they end up enrolling in a master's program at a good school, get top marks, and maybe do a spot of research while they're there.
Afterwards they can use that to springboard into a good PhD program. Upvotes: 2

<issue_comment>username_4: Based on what you have said, I would tend to agree with some of the other commentators that you shouldn't have too many problems securing a position. However, if you do have some challenges with getting accepted to a PhD program at a top university, it will probably be due to uncertainty about (i) the credentials you have, and thus your working ability, and (ii) your ability to work with a given supervisor or team. To provide more certainty about these abilities you could:

1. Consider doing a master's or short course at the institution you want to attend, or one of a similar level. This helps as it can provide an accepted signal of quality (e.g., "I was ranked no. 1 in my cohort within [relevant course], in [respected university]"). If you do this, try to take projects/classes with a sample of suitable supervisors, as doing so will make it a lot easier to convince them that you are a reasonable person they could work with than is possible via email/Skype (as is the case now).

2. Consider finding a professor in your area to work on a paper with. This could be a foreign professor you would like to work under (e.g., one who is abroad), or one who is local but has international contacts in places you want to go (e.g., has co-authored with academics at these institutions) and is known and respected internationally. If you work with a foreign professor and it goes well, they are much more likely to want to supervise you and invest the effort in supporting your application (and you will probably get in on that basis). If you work with the well-connected local professor, your work with them will be a strong signal to the potential supervisors in their network that (i) you are a good worker, and (ii) you are someone who is easy to work with. Plus you can get a recommendation if you are lucky, and that could carry considerable weight.

3. Find some reason to be in an area with good universities (e.g., do an English course in Boston), then arrange a meeting with the supervisors you are interested in working with. This will give you a chance to convince them that you are genuine, and for them to get a sense of whether they can work with you. In my experience, face-to-face meetings are much more likely to spark a desire to collaborate (e.g., within a supervisor-student relationship) than email/Skype exchanges. Upvotes: 2
2015/12/19
<issue_start>username_0: I am writing a paper. I have performed the experiments and I also have the results. However, there is a section in my paper where I have to explain the observation upon which the whole idea is based. To ensure that I explain my observation correctly, I want help from a professor. I will basically be asking him questions along the lines of: 'why does this happen?'. Before asking such questions, shall I inform him that I'm writing a paper and need help in explaining something? I'll definitely thank him in the acknowledgements section, but is it required to add him as an author?

<issue_comment>username_1: He might merit a "thanks" acknowledgement, but probably not being credited with *doing* (a substantial part of) the work. Upvotes: 3

<issue_comment>username_2:

> To ensure that I explain my observation correctly, I want help from a professor. I will be basically asking him questions along the line of: 'why does this happen?'.

Whether this merits coauthorship depends on whether the professor is making a significant intellectual contribution. If you are just getting straightforward explanations of well-known theory (which you need because you are an experimentalist, not a theorist), then there's no need for more than an acknowledgment. On the other hand, if nobody knows why these things happen and the theorist comes up with a novel explanation for your data, then coauthorship sounds appropriate. There may also be intermediate cases, such as theory that is relatively well known but requires some calculation to apply to your case, in which case you would need to discuss whether coauthorship makes sense.

> Before asking such questions, shall I inform him that I'm writing a paper and need help in explaining something?

Yes, certainly. If you get only quick answers that do not seem to involve much effort, then you should mention the paper, but there's probably no need to discuss coauthorship. If you get answers that seem to involve serious effort, then you should discuss coauthorship. (Discussing it doesn't necessarily mean he will be a coauthor. Rather, you just need to be careful to make sure there are no misunderstandings.) Upvotes: 4 [selected_answer]

<issue_comment>username_3: The standards for coauthorship vary by discipline, but basically yes, you should probably mention that you are writing a paper before you ask your question, so you can discuss authorship up front as well. I'm not sure, though, why you are worried about having a professor as a coauthor. It's well established that papers with better-known authors, or even authors from better-known institutions, are more likely to be read and therefore cited. I am a professor, and I personally tend to be generous with coauthorship. If I think that someone has made a substantial contribution, or might make one, I tell them that I wouldn't mind if they were a coauthor. Usually people don't take me up on that, and I just acknowledge them. Sometimes they decide they have made a substantial contribution and want their name on it, which is fine by me. It also depends where you are publishing. Some journals *require* that you say what the contributions of the authors are, so there's no problem with saying X did almost all the work, Y had one cunning idea. You can also just say that in the acknowledgements if there's no other place to say it, but be careful to do it tactfully if you do it at all.
In general, I think it's better not to stress about authorship so long as you write enough papers with enough different people. If you always write with one famous person, there's a chance they will get all the credit. But if you write with a number of combinations of authors, or if you establish a reputation through conference talks or social media, then it will be clear what part of the voice in your papers is your own. Upvotes: 2
2015/12/19
<issue_start>username_0: I am going to my first conference and my mom wants to join. (She can probably get funding from her university to accompany me, even though she is in a different field.) Will it reflect badly on me if we stick together?

<issue_comment>username_1: I have seen non-academic mothers accompany their children, or more accurately their grandchildren, to conferences, and there is absolutely nothing wrong with this. Even in the absence of grandchildren, it would be fine. For example, lots of conferences set aside time for an afternoon of sightseeing, and people often add on a couple of days of vacation. As your mother is an academic in a related field (presumably, since you said she can get funding), it makes perfect sense. As for sticking together, within reason that is fine. This is especially true if she can introduce you to people. Just make sure you keep the relationship professional. Upvotes: 3

<issue_comment>username_2: It's unlikely anyone would notice or care. Anyway, how would they know you are parent and child? Having said that, in the UK a lot of conferences are funded by EPSRC or similar agencies, which in turn are funded by taxpayers. As a taxpayer, I have to say the folks who go on jollies to unrelated conferences can grate. Overall it seems unnecessary, and while I don't think it would necessarily look "bad", it would likely be a missed opportunity to talk to other people. Upvotes: 2

<issue_comment>username_3: You ask about "sticking together". This is probably not ideal. Generally, at a conference, you want freedom of action. Going with your mother may induce you to "talk to the known person" whenever you are unsure; i.e. you talk to her whenever you are not sure how to approach people - the notorious "escape route" - but this is exactly what conferences are not for. When I went to my first conference, I made a point of talking to as many people as I could. At the beginning, it is quite a leap of faith. But later you find it is fun, you get to know people, and you get to know new research directions and ideas. Sometimes it feels difficult, but remember: you all have one thing in common - the topic of the conference. You are very unlikely to go wrong starting a conversation about that. If you are unsure how to start, start with people who look like they are alone at the conference, and probably a bit lost. They will be grateful to have somebody to talk to; you may meet interesting colleagues (don't think that, because they are alone, they are not good scientists - good scientists may not be good at socialising), and you notch up a good deed, to boot. It is fine to take your mother to social events and the like, but I recommend that, if she joins the conference, say, because she is interested, she stays mostly in the background and does not send signals of "supervision" to others or - even more importantly - to you. If you can keep that balance, then, by all means, bring her with you. Upvotes: 7 [selected_answer]

<issue_comment>username_4: Remember that a conference is first and foremost a major professional opportunity for you. So long as you will meet as many other conference participants as you would otherwise, and it wouldn't affect your performance giving the talk, I see no reason not to bring her.
My mother sometimes asked to come to conferences with me, but I didn't let her, because I was afraid that seeing her in the audience during my talk would put me off and I wouldn't do as good a job giving it, which would negate the entire purpose of attending the conference. My mother could be very critical. Unfortunately she didn't live long enough to see academic talks get put on YouTube all the time, but she did read my papers. My father enjoys watching some of my talks online now. On the other hand, my partner is also an academic, and we often attend each other's meetings, and even go to different talks in parallel sessions and learn from each other at the meeting. My partner is very supportive, and I love seeing their smile during my talks. If your mother is a good academic, I could believe that having her at a conference might be similar to that. Upvotes: 2

<issue_comment>username_5: A couple of things that come to mind:

1. It's a bit troubling that you say "my mom wants to join." What do *you* want?

2. Also, your first conference is an exciting moment in your academic career that you'll likely remember for a long time. Such an occasion can be a really good opportunity to strike out on your own, be independent, and feel free to invent yourself as the person you'd like the world to see you as (and, as others have said, make maximal use of the professional opportunity of being at a conference, though personally I'm less bothered about that). With all respect due to your mom, who I'm sure is an extremely nice lady, it sounds probable to me that her presence will hinder such personal growth on your part, though she may not realize that.

Just my two cents based on the very small amount of information you've provided. In any case good luck, and have fun!

**Edit:** I also want to add, just in case you or anybody else reading this cares, that I think "will it reflect badly on me" is the wrong question to ask. What I mean more precisely is this: if you happen to be a mature, strong-willed, independent person who just happens to get along fabulously with your mom and think you'll enjoy having her around, I'd say bring her along, and it doesn't matter what other people will think; life's too short to care too much about that (or, [as <NAME> said](http://news.stanford.edu/news/2005/june15/jobs-061505.html), "Don't let the noise of others' opinions drown out your own inner voice"). On the other hand, the advice I wrote above is a way of helping you reflect on whether, by letting your mom come to the conference with you, you may in fact be holding yourself back from *becoming* a mature, strong-willed, and independent person. If that is the case, then although having your mom at the conference may indeed reflect badly on you in the eyes of some people, *that is not the reason why you shouldn't bring her* -- rather, you shouldn't bring her for a different, more intrinsic reason: having her be such a dominant presence in your professional and social life is simply not good for you. Upvotes: 5

<issue_comment>username_6: Would your mother attend the conference even if you didn't? If so, then sure -- but make sure not just to follow her to the talks she is interested in. If not (and the question makes me think that's the case), I think it would be a bad idea for her to join you.

1. Conferences are professional events where your goal should be to meet and share ideas with others in your field. You will have something to talk about with them, but your mother won't.
2. How would her being there impact your ability to socialize? This is going to depend on the conference, but people generally like to get drinks in the evenings. It would be odd if you went out with a few graduate students and she came along.

3. Irrespective of how independent you can be, you should think about the signal it sends to your potential employers (i.e. people on search committees). Think about job applicants who bring their mothers to an interview, and ask yourself whether that hurts their chances of getting the job. Upvotes: 4

<issue_comment>username_7: I would chime in that if she wants to come to celebrate your success(es) and witness something you are publicly participating in, that sounds great. I would say terrific. However, if she is coming to overtly - overly - mother you when you are an adult? I recently learned that terrific actually means terrifying by definition, so maybe it is terrific in that case... hmm. It was hard to tell from what you wrote. If it is the latter, it is time to cut the ole umbilical cord - it probably was time some time ago; in fact, not being sarcastic, I'd argue that cord was cut a long time ago. That said, I'd welcome her to come and celebrate your participatory role in the conference; sure, bring grandma Esralda too - your family is likely proud of you and wants to support you. If the former, go for it. If the latter, I'd politely state that you are now an adult and would prefer that she only come to witness the conference, without bringing any unnecessary focus onto you during your academic responsibilities at the conference. Best to you and her. username_7 Upvotes: 1

<issue_comment>username_8: There are at least three issues here: conference funding, conference participation, and joint travel. It is theoretically possible, but unlikely, that your mother's department has so much travel and conference money that every researcher gets to every conference they want. It is also possible that the conference is one your mother should be attending anyway, and your participation is only a tie-breaker among conferences. If neither of those cases applies, consider the following scenario:

* There is a conference you should attend.
* Your department regrets to inform you that there is insufficient travel budget for you to go.
* You find out they funded another department member going to an off-topic conference primarily to hang out with a relative.

How would you feel? Unless it is a conference she should be attending for her own research, I think your mother should either not attend or pay for it herself. As others have pointed out, to get the full benefit of the conference you need to get into casual conversations with other participants. Having someone with you who is not a specialist in your research area could be a hindrance. On the other hand, if she is attending primarily for your company, telling her you can't have lunch with her today because you want to continue a discussion over lunch would be unfair to her. If the purpose of her going is spending time together in a different location, consider extending your stay for a few days after the conference and having her join you for that time as a vacation. That way you can concentrate on the conference while it is running, and give your mother your full attention during your joint vacation. Upvotes: 0

<issue_comment>username_9: The obvious answer is a RESOUNDING "yes." Invite your mom to come on her own dime and credentials.
When you present your paper, stop, point her out to the seven people in attendance at your session, and acknowledge her, calling her out as a beloved colleague in the unrelated field of xyz. Blow her a kiss, say "thanks, mom!" and go on with your paper. Spend MOST of your time at the conference schmoozing with colleagues. Upvotes: -1
2015/12/19
<issue_start>username_0: I am a PhD candidate in a small country. I have two master's degrees, one from an American university. I am also an experienced professional and an expert in the subject of my PhD studies (in my country, that is). I need to publish a paper in an international journal as a condition of obtaining my PhD. How do I choose a journal? I cannot afford years of waiting for the paper to be published, or of submitting it and being rejected many times. I am looking through the journals, and there is simply no way to know which one is suitable for me. Any help? I understand about looking for journals which have previously published similar articles, but how do I know whether a journal is likely to accept my article? Thank you!

<issue_comment>username_1: First, you need a list of the journals that are relevant to your field. Here are some ways to create that list. I recommend that you do all of these, not just one or two.

* Ask your supervisor.
* Ask your librarian.
* As you read articles, take note of the journals that the articles are published in, especially if the article is interesting or relevant to you.

Now that you have a list of relevant journals, it's time to select the one that's right for your article. Here are some things that will help. Again, I recommend that you do all of these.

* Ask your supervisor.
* Search the literature for articles that are similar in some way to yours. See where those articles are published.
* Visit the home pages of journals that you're interested in. Read their "aims and scope". Be warned that you have to read a lot of different "aims and scope" writeups before you catch on to the lingo. Is your article covered by their list of topics? Do they only publish major breakthroughs in the field, or do they publish more modest articles?
* Look at some articles that are published in journals that you're interested in. Are they roughly similar in focus? Are they similar in quality? (Don't underrate your own work. Beware the imposter syndrome!)
* Since you've written an article, you must have done a fair amount of research. So ask yourself: "Where would I go (i.e., what journals) to find an article like this?"

A few miscellaneous tips:

* If you're not sure if an article is appropriate for a particular journal, you don't have to submit it and wait 6-12 months for an answer. Write up a 1-2 page summary of your article (called an "extended abstract"), and send it to the editor asking if it's appropriate. They'll probably answer within a week or two.
* Check [Beall's list](http://scholarlyoa.com/2015/01/02/bealls-list-of-predatory-publishers-2015/) to make sure you're not dealing with a predatory publisher.
* Check the copyright/license terms. (E.g., will they let you self-archive?)
* Check if there are any publication charges that you will have to pay. I would only pay a fee if my article will be open access (free for others to download), and only if the amount is reasonable. The fee may be lower if you're in a developing country.

Upvotes: 3

<issue_comment>username_2: One place to find an appropriate journal is your very own list of references. Where were the most relevant articles that you cite published? Those journals might be good places to start. You should also think about whether your long-term plans include staying in academia. If not, pick whatever journal satisfies your program's requirements. If you do, however, you should start at the top and work your way down if/when the paper gets rejected (hopefully with useful comments from reviewers).
That greatly increases how long it takes to get the article out, but once published, it will be a lot more valuable to you. The first thing search committees look at is where you have published. If it's only in journals that people have never heard of, you may be in trouble. Upvotes: 2

<issue_comment>username_3: Most of the big publishers, like Elsevier or Springer Nature, have a tool called a "journal finder" or "journal suggester" where you can paste your abstract, and they give you a list of suitable journals to start with. Upvotes: 1
2015/12/20
<issue_start>username_0: I had a Skype interview with a professor several weeks ago, and the professor also promised me a PhD offer. My GPA is not high enough to meet the requirements of the department, but she said she would argue for me. After that, I sent the professor an email trying to discuss two academic questions, but received no reply. Recently, I checked my Skype and found the professor has already deleted me as a contact. And after I submitted the application, I also sent an email to inform the professor; no reply has been received either. So can I take this as a rejection?

<issue_comment>username_1: The professor is kind. She could not say *no*. She wished to give you an opportunity, but maybe she could not manage it. So she deleted you. A PhD is required if you are looking for a job, but if you create an opportunity for others, you don't need that. You can learn it yourself; there is no alternative. Upvotes: -1

<issue_comment>username_2: No. I don't think you should assume that this is a rejection.

* Deleting you from Skype does not sound unusual. If they added you for a specific purpose and don't intend to contact you regularly, then they may just want to avoid notifications popping up all the time about when you are logged on.
* Professors are busy. A lack of email reply, especially at this time of year when they may be about to leave for the holidays (depending on where you are in the world), may simply mean they have not got around to replying to you.
* It's possible that the department you are applying to has strict procedures about job/studentship offers, such that the professor is not allowed to give you certain information until it has been formally confirmed by the school. Depending on the exact questions you asked, the professor may feel that they need to wait for the proper process before getting in touch with you again.

Assuming that you have gone through an official application procedure, which it sounds like you have, you will get official notification at some point from the department as to whether or not you have been successful. As I said above, be aware that this might be a bit delayed due to the holidays. But I don't think you should assume one way or the other until you get the confirmation. Good luck! Upvotes: 6 [selected_answer]

<issue_comment>username_3: Well, I think it may be a rejection, but nobody knows for sure, of course. The professor may feel awkward for not being able to argue for you and withstand the pressure of the other members of the committee. That is why deleting you from the contact list is a convenient way to avoid explanations of why you were not admitted. Anyway, I wish you the best of luck in your graduate studies, wherever they take you. Upvotes: -1

<issue_comment>username_4: I have had several Skype interviews lately. Professors respond in different ways. One professor may keep you guessing about acceptance/rejection in the middle of the interview, while another may hint at the outcome of the interview on the spot. My experience so far does not suffice to draw statistics. Let's hope you get the position. The problem is the limited number of positions and the huge number of applicants. You think you are good, and then someone with better experience comes along. Upvotes: 1
2015/12/20
<issue_start>username_0: I am a freshly graduated Ph.D. I want to further improve my mathematical writing (in English, if this matters). I am looking for books/resources that address good mathematical writing. I can find some books, for example, *A Primer of Mathematical Writing* from the AMS. But I wonder if there are any other new books or resources.

<issue_comment>username_1: Coursera has a specialization called [Reasoning, Data Analysis, and Writing Specialization](https://www.coursera.org/specializations/reasoning?utm_medium=listingPage). I recommend watching the first three weeks of Think Again: How to Reason and Argue. You can consider the course English Composition I as well. There are other MOOCs dealing with your topics, too. Search coursetalk.com to find some MOOCs. Upvotes: 2

<issue_comment>username_2: Can you elaborate on what type of writing? For example, <NAME> brings forward Euclid in the Rainforest, which includes mathematical concepts but is also in part biographical and simply an enjoyable read. I am not sure if you mean tools such as Mathematica or LaTeX for symbology? Or perhaps something like ShareLaTeX at <https://www.sharelatex.com/learn/Mathematical_fonts>? Upvotes: 0

<issue_comment>username_3: I recommend the paper (a series of class notes) [***Mathematical Writing***](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwjL0arDvOvJAhWGbR4KHZQFAH4QFggdMAA&url=http%3A%2F%2Fjmlr.csail.mit.edu%2Freviewing-papers%2Fknuth_mathematical_writing.pdf&usg=AFQjCNEsoa5aRH_MOBsRI3qA8T_XLlgUjQ), 1989, written by [<NAME>](https://en.wikipedia.org/wiki/Donald_Knuth) (author of "*The Art of Computer Programming*", among other things), <NAME>, and <NAME>. Here is a [description from Stanford](http://www-cs-faculty.stanford.edu/%7Euno/klr.html):

> This booklet records the class on *Mathematical Writing* led by <NAME> at Stanford in 1987. Among the 31 lectures are guest appearances by <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>.
>
> We saw many examples of "before" and "after" from manuscripts in progress. We learned how to avoid excessive subscripts and superscripts. We discussed the documentation of algorithms, computer programs, and user manuals. We considered the process of refereeing and editing. We studied how to make effective diagrams and tables, and how to find appropriate quotations to spice up a text. Some of the material duplicated topics that would be discussed in writing classes offered by the English department, but the vast majority of the lectures were devoted to issues that are specific to mathematics and/or computer science.

It is helpful not just for mathematicians but also for scientists and technical communicators of all sorts. Upvotes: 4

<issue_comment>username_4: Knuth, mentioned in another answer, is trustworthy and a good author. The book mentioned by the OP may be useful to beginners, but I don't agree with many of Krantz's (the author's) opinions about writing. In general, people have various opinions and tastes about writing, so it's good to see a spectrum of opinions and determine what approach fits you in a particular situation. And now that you've started thinking about these things, read mathematics critically so you can build up examples of what you think works well and what doesn't (one concrete before/after sketch is given below).
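For instance, here is a minimal before/after sketch in the spirit of the Knuth notes' advice on avoiding excessive subscripts. This is my own illustrative example, not one taken from any of the books mentioned; the quadratic form and the symbol names are arbitrary:

```latex
\documentclass{article}
\usepackage{amsmath} % needed for \substack
\begin{document}
% Before: subscript-heavy and hard to scan.
\[
  S = \sum_{i=1}^{n} a_{i} x_{i}^{2}
    + \sum_{i=1}^{n} \sum_{\substack{j=1 \\ j \neq i}}^{n} b_{ij} x_{i} x_{j}
\]
% After: name a matrix once, then reuse it.
\[
  S = x^{\mathsf{T}} A x,
  \qquad A_{ii} = a_i, \quad A_{ij} = b_{ij} \ (i \neq j).
\]
\end{document}
```

The two displays are equal, since expanding $x^{\mathsf{T}} A x$ over all index pairs reproduces the diagonal and off-diagonal sums; the rewritten version simply pushes the index bookkeeping into the definition of $A$.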
Anyway, some other references are:

* [Terry Tao's advice on writing](https://terrytao.wordpress.com/advice-on-writing-papers/), which includes many links to pieces written by other authors
* [Milne's tips for authors](http://www.jmilne.org/math/tips.html), which includes a reference to Krantz's book and an essay by Hersh

It might also not be a bad idea to read general advice about writing (not just mathematical/technical writing). Upvotes: 3

<issue_comment>username_5: These two books are spot-on and I'm surprised they haven't been mentioned yet. *Handbook of Writing for the Mathematical Sciences*, <NAME>, SIAM, <https://www.siam.org/books/ot63/>. *Writing Mathematical Papers in English*, <NAME>, EMS Publishing, <http://www.ems-ph.org/books/book.php?proj_nr=34>. Upvotes: 2

<issue_comment>username_6: I have come across a good book on mathematical writing titled `Write Maths right` by Prof. Radhakrishna. Upvotes: 1
2015/12/20
<issue_start>username_0: The title basically says it all. At which point in the refereeing process does it become acceptable to mention on my list of publications (e.g. the one on my website) that a paper has been accepted for publication at Journal X? Specifically:

* is it OK when I receive word from the editor that the journal is "ready to publish my paper after a revision"?
* is it OK when I confirm the final version of the paper?
* ... or must I wait until the paper is actually published?

<issue_comment>username_1: I'd do it when I got the final acceptance letter. Before that, you can say "submitted" or "under revision". Upvotes: 4 [selected_answer]

<issue_comment>username_2: If you still have to make revisions, you should list it as *Revise and Resubmit at X*. That is a much stronger signal than *Under Review at X*: anyone can submit a paper to any journal, but an R&R implies that it's quite likely to be published there. Once you get the official acceptance letter, you can list it as *Accepted at X* or *Forthcoming in X*. Upvotes: 2
2015/12/20
<issue_start>username_0: Right after school I decided to pursue a degree in physics. It went well at first, but then I started to struggle with depression and other personal issues. Eventually I started to fall behind and felt overwhelmed by the coursework; I lost my passion for the field and just felt frustrated. I realized it was time for a change, so after moving away and attending another university, things were looking up a bit. Since I was too far behind in my coursework, I was afraid of losing my financial support. Thankfully I was always good at programming, so I found a nice job to cover my expenses and switched fields to applied computer science. Ever since then, things have been working out great. I got top grades, have been nominated for a stipend, and have even found the time to volunteer and participate in some minor research groups. Anyway, I feel bad for not being able to finish my physics degree. I feel like I completely wasted 2 years of my life and just don't know what to do or think about this. I do miss every aspect of physics now and secretly regret my decision every now and then. **Would it be reasonable to get my physics degree (at least finish the bachelor's) when I am done with my current field? If not, how can one deal with the impression of failure and regret?** (English is not my first language; any edits are warmly welcomed.)

<issue_comment>username_1: Don't worry; you can always apply for a master's degree related to the field of physics. It is nothing unusual. You should let it go, and multidisciplinarity is increasingly valued in research and in any future career related to your interests (assuming that you want to continue your studies). Upvotes: 3

<issue_comment>username_2: Difficult times occur. This is normal. Unless you did something foolish that caused your failure (and nothing indicates you did), there is no reason to feel regret. You cannot control everything in your life. Whoever does not attempt anything does not fail; people of action fail regularly. Failure is a central part of life, and there is no reason to regret attempting to study physics and then finding out it's not for you at this phase of your life. In fact, I actually recommend testing one's limits early; knowing where they are makes one a more mature personality. Your current success is a case in point. And keep in mind: limits may be expanded, albeit slowly. Therefore, my recommendation depends on whether your regret refers to ceasing to do physics itself (the topic!), or rather to the failure to complete (the studies!). If you feel that you would have liked to do physics (the topic!) after all, perhaps you can consider reactivating your studies (or adding a master's degree, as suggested elsewhere). But if it is only the missed opportunity, then I recommend chalking your physics studies up as valuable experience and moving on along the path where you have made progress and which clearly suits you well. Upvotes: 6 [selected_answer]

<issue_comment>username_3: Academic success is measured by your best, not your worst. If you write 100 terrible papers and 1 good one, people will think of you as a good scientist. Similarly, if you get your physics degree after failing it, you will still have proved that you have a high level of understanding of physics.
If an employer has the choice between someone who got physics and computer science degrees on the first try and someone with the same degrees on the second try, then the first will be valued a bit higher; but considering how few people have both a physics and a computer science degree, it will probably never come to that. That said, you don't need a physics degree to prove you are not a failure; you already did that with your computer science degree. My advice would be to get a physics degree for the love of physics, not because you don't want to look like you wasted 2 years. Upvotes: 2

<issue_comment>username_4: When you are in certain situations, sometimes it's best to have a change of environment, and that is what you did: you changed to something you are clearly very good at. You should not feel any sort of negativity towards yourself because you felt at the time that the physics course wasn't working for you; lots of people change their course. Think of it this way: had you stayed on your physics course, would you have been able to cope with all the stress? Trust me, things happen for a reason, and I feel that doing this new course has probably helped your mental stability, so you are much calmer. You didn't waste the two years; if anything, you gained two years' worth of experience that you could come back to if you ever wanted to. We think the worst thing is to waste time and money, but the worst thing to ever let waste away is your happiness and how you feel. I'm sure the people who care for you feel the same way. So don't feel sad; be proud of your achievements, because you have done a heck of a lot. WELL DONE MAN! Upvotes: 1

<issue_comment>username_5: After completing school and my academic degree, I decided to start a dissertation. Being accustomed to everything running its usual course, I simply chose a field which was convenient for me (location). Well, after half a year a nasty realization crept in: the interest was not enough to get me through the dissertation. Try as I might, I could not muster the energy to get through. So with rising horror I realized that the normal academic career, which had been nearly half of my lifetime at that point, would come to an end. So I can relate to your feelings. Yes, I felt very miserable at that point, and no, the rational consolations did not reach the heart. It is completely normal that you feel this way, and you need to give it room. Take your time.

> I feel like I completely wasted 2 years of my life and just don't know what to do/think about this

No, you did not waste your life. Only after a time did the realization creep in that I *simply was not ready for the task at that point*. That was a sobering experience. I also realized that failure is the only way to find out your boundaries and wrong assumptions; *you learn much, much more from failures than from your successes*. Sorry if that sounds like a string of nice-sounding pieces of wisdom, but it is true. Your pining itself shows you did not waste your time. If a university never gave out degrees or other forms of commendation - no rewards, no prizes, no grades - would you still have joined the course? It seems like it. You still learned and enjoyed it; you had a good time. Simply give yourself time; you will realize what the right course of action is. Upvotes: 2

<issue_comment>username_6: If you feel a hankering to learn more physics, take more physics classes. You can start now, with one course per semester, alongside the courses you need for your major.
After you graduate, you can continue taking one course per semester if you wish, or you can take several at a time. You can take a class while working full time, if you make sure not to put in extra hours in your job, and if you arrange your schedule carefully. You may need some more math classes. Certain math skills and the right sort of mathematical dexterity make physics courses much more manageable. See how you feel as you go along. Let your interest in the subject matter determine how far you go with physics. In other words, don't pursue a degree just for the title and the diploma to put on the wall. That said, if you find, over time, that it is an enduring interest and you want to keep going, probably the simplest thing would be to take the necessary undergraduate math and physics classes (and probably a couple of chemistry and engineering classes to complement the physics) as a non-matriculated student, and apply to grad school. There again -- you don't need to decide a priori whether to aim for a Master's or a PhD. Keep your options open. It's great that you have a skill set that makes you reliably and enjoyably employable. You have your whole life ahead of you in which to study anything and everything that interests you. My favorite undergraduate math teacher (Calculus I) went back to school to study math at the age of 50. That showed me that there are many possible paths to knowledge. Upvotes: 1

<issue_comment>username_7: Your pursuits in physics will reward you in ways you never expected. While you may not have achieved the degree, you did gain knowledge and understanding. You can always finish the degree, if a degree is necessary for a field you want entry into. I would celebrate what you've accomplished, and know you are all that much closer to getting that degree if you should ever choose to do so. You might want to try just one class and chip away at it over time so it won't be overwhelming. You didn't fail. You simply need a wider time frame for your goals :) <NAME> Upvotes: 2
2015/12/20
<issue_start>username_0: I have come into conflict with co-authors when asked to do things that I consider questionable.

1. Once I was told to try every possible specification of a dependent variable (count, proportion, binary indicator, you name it) in a regression until I found a significant relationship. That was it: no justification for choosing one specification over another besides finding significance. The famous fishing expedition for starfish (also known as p-hacking).

2. On another occasion I was asked to re-write the theory section of a paper to reflect an incidental finding from our analysis, so that it looked as if we had been asking a question about the incidental finding and had come up with the supported hypothesis *a priori*. The famous hypothesising after the results are known (HARKing).

In both cases I refused to comply and explained my reasoning, which led to conflicts with the other party. I tried my best not to sound accusatory (so as not to give the impression that I doubted the ethics of the other party), but it nonetheless led to attrition and a worsening of the working relationship. In the long argument that followed, I was told that 'social science is not done like the natural sciences,' that I was 'too inflexible' and 'too positivist,' and that everybody does these things I was being asked to do. The argument culminated with me being asked to 'stop obstructing the progress of the paper,' which made me feel very frustrated. Since then I have seen several cases of what I suspect to be this type of research practice: for example, PhD students coming to ask me what they should change in their models so that their results come out significant, and people working in the same computer lab as me asking for the same type of help. I consider these things seriously questionable from an ethical point of view, and I would like to be able to argue against them effectively. However, the other parties are usually experienced researchers, or students under the supervision of an experienced researcher. As a young researcher, I feel that I am at a disadvantage when arguing against them. It is often the case that I am arguing against the instructions of someone who has more experience, more publications, and, supposedly, more knowledge than I do. Is this one of those cases where we can't do much but try to be the 'change that we want to bring about,' keep quiet, and just make sure that we are doing the right things ourselves? Should we speak up more often? If so, any good strategies for being more effective and convincing?

p.s. The tag is social sciences because of my field, but I reckon that this happens in other areas as well, and I welcome input from other fields.

---

EDIT 1: In example 2), at no point did anyone suggest that we would confirm the new hypothesis on a new set of data. The intention was to pretend that we had got it right from the onset, which is why I objected.

EDIT 2: Just to be clear: I am aware of the right way of doing these things (i.e. cross-validation, confirmatory analysis on a new dataset, penalising for multiple statistical tests, etc.). This is a question about how to argue that p-hacking and HARKing are not the way to go.

EDIT 3: I was unaware of the strong connotations of the word misconduct.
I have edited and replaced it with 'questionable research practices'.

<issue_comment>username_1: Your instinctive concern about creating hypotheses out of the data and pretending they were there from the outset is on the right track. In statistics, the so-called chi-square test can be used to compare data with models which have been fitted to the data themselves. However, for this, the chi-square test must be adapted to essentially "penalise" one's extraction of the parameters when testing how significant the match is. This is not easily generalised to other setups, so in general learning theory and practice one splits the data into multiple groups: one part is used to optimise the parameters; one part, at first unseen, is used to optimise the generalisation; and the last, unseen, part **never** feeds into the model construction and is used to test how well the first two stages worked. This is called "cross-validation". Perhaps you can suggest (or simply introduce) such a methodology to your group, by splitting the data randomly into different components; from one you construct the model, which is then tested on the unseen data. The details of how to do the split will depend on your domain. This way, you have confidence that the model is predictive. For this to be sound, you need to make sure that the model is not using the complete dataset in any form (not even through one smart colleague who remembered that the data are parabolic on the whole). Best is to never look at the unseen data until the model is complete.
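For concreteness, here is a minimal sketch of such a three-way split. This is my own illustration, not a prescribed recipe: the toy data, the 60/20/20 proportions, and the candidate models are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 300 observations, one outcome and four predictors.
n = 300
X = rng.normal(size=(n, 4))
y = 2.0 * X[:, 0] + rng.normal(size=n)  # only the first predictor matters

# Shuffle once, then split into train / validation / test (60/20/20).
idx = rng.permutation(n)
train, val, test = np.split(idx, [int(0.6 * n), int(0.8 * n)])

def fit_ols(cols):
    # Fit a least-squares model on the TRAINING rows only.
    A = X[np.ix_(train, cols)]
    beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    return beta

def mse(cols, beta, rows):
    # Mean squared error of a fitted model on a given set of rows.
    return np.mean((y[rows] - X[np.ix_(rows, cols)] @ beta) ** 2)

# Candidate "specifications": which predictors to include.
candidates = [[0], [0, 1], [0, 1, 2, 3]]
fits = [(cols, fit_ols(cols)) for cols in candidates]

# Choose between candidates on the VALIDATION rows...
best_cols, best_beta = min(fits, key=lambda f: mse(f[0], f[1], val))

# ...and touch the TEST rows exactly once, at the very end.
print("chosen predictors:", best_cols)
print("honest test-set MSE:", mse(best_cols, best_beta, test))
```

The key discipline is that the test rows are consulted exactly once, after all modelling decisions have been made; the reported error is then an honest estimate rather than the product of a search.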
As for post-hypothesising, I have often found this not even to be necessary. You might start with a hypothesis, then discover it is not valid, but then find another, interesting phenomenon instead. This is called "discovery", and the coolest papers result from that. If the top journals of your field do not accept such a style, because they want the standard "hypothesis-experiment-validation" cycle, then the problem lies deeper in your community than with your colleagues. In short: fitting models to your data and comparing the match is OK if you have a way of penalising that extraction (as in the chi-square). Failing that, you can do "cross-validation" for sound results. Finally, instead of post-hypothesising, my suggestion is to hypothesise, invalidate the hypothesis, and demonstrate the emergence of a different hypothesis. Upvotes: 3

<issue_comment>username_2: This sort of thing happens in both the social sciences AND the physical sciences. For instance, a scientist will often collect data to test a theory but will also collect lots of extraneous data. Analyses of these extraneous data should often be considered exploratory and labeled as such (because significant results could be due to the multiple tests). [As another example, you don't want to know how often chemists repeat an experiment until they get a good yield, then stop and report that yield without mentioning that it was the best of 20 experiments!] The fastest solution is to agree to do the multiple analyses, but then tell what you did in the methodology section. If you say that you analyzed it several ways and one way showed significance, readers can decide whether or not to believe the result. Just tell your co-authors that not mentioning that you did multiple analyses leaves the research improperly described. However, you can (occasionally) save the day. If, for instance, you did 10 different analyses and picked the best one, you'll be OK if the result would hold under a Bonferroni correction (i.e. instead of requiring significance at the 0.05 level, you require significance at the 0.05/#tests level). So if the final test shows a p-value such as 0.000001, you are probably on safe ground. Another approach is to decide a priori that some tests are obvious (confirmatory) and some are just searching around the data (exploratory). Then you can report the confirmatory results, while labeling anything interesting among the 'exploratory' results as 'needs further research'. That is, you can mix well-founded tests with 'data dredging' as long as you acknowledge the difference between the two sets of tests. But if it isn't possible to rescue the result, I'd go with insisting that they describe what they did, with the comment that if they are embarrassed to describe it, they shouldn't have done it. :) You might also add that it is often obvious (at least to statisticians) that a researcher has pulled this trick. When we see a test in isolation that would not occur to us as the obvious approach, or a hypothesis that we'd not choose a priori, it looks suspicious. For instance, I recently read a paper that claimed that a certain group of people tend to commit suicide more often if they were BORN in the spring. It was clear that JUST testing the effect of birth in springtime was not something that would occur to anyone without testing the effect of birth in the other seasons. So they probably had a spurious result due to multiple comparisons. Upvotes: 6

<issue_comment>username_3: This is an excellent question. I do think you (and others in similar situations) should speak up, but I realize this is very difficult to do. Two things I'd suggest:

1. Try to figure out whether the people you're dealing with *understand* that the methods they're proposing (p-hacking, etc.) are dodgy or not - i.e. whether it's an issue of ethics or ignorance. This is harder than it may seem, since I think many people genuinely don't understand how easy it is to find patterns in noise, and how "researcher degrees of freedom" make spurious patterns easy to generate. Asking people, non-confrontationally, to explain how doing tests on "every possible specification of a dependent variable" and selecting those with "p<0.05" corresponds to <5% of "random" datasets having a feature of interest would make this clearer, and would perhaps give you insight on the question of ethics or ignorance. I'd bet that a good fraction of people aren't deliberately unethical, but their cloudy grasp of quantitative data obscures ethical thinking.

2. Something I've found helpful in related contexts is to generate simulated data and actually *show* the principle that you're arguing. For example, generate datasets of featureless noise and show that, with enough variables to compare between, one can always find a "significant" relationship (obviously, without correcting for multiple comparisons). It may seem strange, but seeing this in simulated data seems to help.
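A small simulation in this spirit might look as follows. This is my own sketch, not a published analysis; the 100 subjects, 50 predictors, and the seed are arbitrary assumptions, and it also demonstrates the Bonferroni correction mentioned in the previous answer:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Pure noise: one "outcome" and 50 unrelated "predictors" for 100 subjects.
n_subjects, n_tests, alpha = 100, 50, 0.05
y = rng.normal(size=n_subjects)
X = rng.normal(size=(n_subjects, n_tests))

# Correlate the outcome with every predictor, as a fishing expedition would;
# pearsonr returns (correlation, p-value), and we keep the p-values.
pvals = np.array([stats.pearsonr(X[:, j], y)[1] for j in range(n_tests)])

# With 50 independent tests on noise, a few hits at p < 0.05 are expected.
print("spurious 'significant' predictors:", int((pvals < alpha).sum()))

# Bonferroni: divide the significance threshold by the number of tests.
print("significant after Bonferroni:", int((pvals < alpha / n_tests).sum()))
```

On a typical run, a handful of the 50 noise predictors come out "significant" at p < 0.05 (about n_tests × alpha = 2.5 on average), while essentially none survive the Bonferroni threshold.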
Assuming I am correct, I don't think you should pursue this argument **(*at this stage*)** if you value your academic career, as it will cause you to burn important bridges and close doors. For instance, if your supervisor p-hacks and you expose and destroy him for it, then you will have lost your main support and dramatically reduced your likelihood of being able to secure a career in this area. Related to this, **(*at this stage*)** I don't really think that you are optimally placed to challenge the negative influence of p-hacking. Here are a few reasons why I think this. First, as related to what I said above, if you are in the infancy of a career within a specific social system then you cannot easily impact the behaviour of those who are already established in that career, and who neither know nor respect you. Second, you cannot fully understand why the system operates as it does, nor the levers that need to be pulled to change that operation, until you are more familiar with it. You might be able to make a micro-level difference (e.g., you expose some people you work with), but I don't see that as likely to be very effective, as it will cripple you to do so. To sum up my thoughts with an anecdote: imagine that you grow up in a city where all the teachers are corrupt and incompetent. Do you think it would be best to protest against them while you are in the school? Probably not, as you would achieve very little, and the teachers would probably use their power to prevent you from graduating and essentially ruin your life. Alternatively, would it be better to tolerate the teachers' flaws until you are out of the system (or higher up) and in a position to actually change things? I would think so, as in that case you might end up in a position of authority, and have the resources available to do something to change the teaching system. Of course, all of this is just my opinion and I could see many ways in which you could argue against it :) **28/12/2015: Adding some more content to explain and address comments.** I would like to put more emphasis on my main point; to appeal to pragmatism and wait for a better time to act. Personally, I think that there is a time and a place for activism, and that sometimes it is best to keep your mouth shut and wait until you have a better chance to do something, rather than to speak up and get shot for nothing. Thus, in any case where activism is an option, the decision whether to engage in it should be contingent on various considerations, such as the severity of the undesired outcome, the risk to the individual in preventing it, their ability to prevent it, and their moral framework (e.g., deontological or utilitarian). As the saying goes, you need to pick your battles; every battle will take its toll, and some tolls might not be worth paying for what they get you. Personally, I feel that if you are going to publish something that might wipe out humanity or end up with someone getting killed, then by all means you should make a personal sacrifice to prevent it (if you can do something).
On the other hand, suppose the current 'negative' outcome that you foresee is unethically (by some/most authors' current norms) changing the focus of a paper (that 5 people will actually read) to look at one significant relationship (e.g., age and its correlation with frequency of cycling) rather than another, previously planned relationship (e.g., gender and its correlation with frequency of cycling) that turned out not to be significant, and the outcome of speaking up is either (i) you will be sacked and the paper published without you, or (ii) nothing will be published and no-one will ever benefit from knowing about the significant relationship that you found. In that case, I am more convinced that engaging in activism is not the way to go ***(at this stage anyway)***. And yes, I accept that my arguments here are flawed simplifications of what is a very complex reality, but I hope you can understand my general point and give some thought to it. Upvotes: 0 <issue_comment>username_5: One option is to make 'constructive' points. If your co-authors are (as many are) used to different degrees of p-hacking, they will probably not be too happy to hear that their results are unpublishable as they stand. If you were able to offer a solution to publish the results while *also* avoiding these bad practices, then few would object. The best way is probably to try out Bayesian analyses. Here, (in some cases) non-significant results will also be interpretable and thus publishable. Upvotes: 1 <issue_comment>username_6: Describe exactly what you have done in the paper. As long as you are honest, the paper will be judged by the reviewers, editors and readers. Even people doing p-value hacking will have a hard time removing an honest description from the paper. If they tell you to remove it, ask them why, and you will have the upper hand in the resulting discussion. Upvotes: 2 <issue_comment>username_7: Kenji, For the last few years, I have given a continuing education course called Common Mistakes in Using Statistics: Spotting Them and Avoiding Them. I hope that some of the approaches I have taken might be helpful to you in convincing your colleagues that changes are needed. First, I don't start out saying that things are unethical (although I might get to that eventually). I talk instead about mistakes, misunderstandings, and confusions. I also at some point introduce the idea that "That's the way we've always done things" doesn't make that way correct. I also use the metaphor of "the game of telephone" that many people have played as a child: people sit in a circle; one person whispers something into the ear of the person next to them; that person whispers what he/she hears to the next person, and so on around the circle. The last person says what they hear out loud, and the first person reveals the original phrase. Usually the two are so different that it's funny. Applying the metaphor to statistics teaching: someone is genuinely trying to understand the complex ideas of frequentist statistics; they finally believe they get it, and pass their perceived (but somewhat flawed) understanding on to others; some of the recipients (with good intentions) make more oversimplifications or misinterpretations and pass them on to more people -- and so on down the line. Eventually a seriously flawed version appears in textbooks and becomes standard practice. The notes for my continuing ed course are freely available at <http://www.ma.utexas.edu/users/mks/CommonMistakes2015/commonmistakeshome2015.html>.
Feel free to use them in any way -- e.g., having an informal discussion seminar using them (or some of them) as background reading might help communicate the ideas. You will note that the first "Common mistake" discussed is "Expecting too much certainty." Indeed, that is a fundamental mistake that underlies a lot of what has gone wrong in using statistics. The recommendations given there are a good starting point for helping colleagues begin to see the point of all the other mistakes. The course website also has links to some online demos that are helpful to some in understanding problems that are often glossed over. I've also done some blogging on the general theme at <http://www.ma.utexas.edu/blogs/mks/>. Some of the June 2014 entries are especially relevant. I hope these suggestions and resources are helpful. Feel free to contact me if you have any questions. Upvotes: 6 [selected_answer]<issue_comment>username_8: Lots of good answers already. However, in academia, it's always better if you can back up your position with a nice published reference. Happily, the question of p-hacking and replicability is being raised and addressed more and more often in different disciplines. I'll set this up as a [CW post](https://meta.stackexchange.com/q/11740/256777) to collect pointers to relevant publications we can use in discussions with coauthors who don't see the problem with questionable statistical practices. Everybody, please feel free to edit with your discipline's relevant articles or conference papers. **Psychology** * Here is an editorial by the Editor-in-Chief of [*Psychological Science*](http://pss.sagepub.com/), which is pretty much the mother of all psychology journals (Open Access; I also recommend the papers cited by Lindsay): <NAME> (2015). Replication in Psychological Science. *Psychological Science*, 26, 1827-1832. [DOI:10.1177/0956797615616374](http://dx.doi.org/10.1177/0956797615616374). * Here is a study in *Science* that shows that we indeed have a "replicability crisis" in psychology - a large collaboration set out to replicate 100 effects reported in well-regarded journals, and only 36% did replicate: Open Science Collaboration (2015). Estimating the reproducibility of psychological science. *Science*, 349, 6251. [DOI:10.1126/science.aac4716](http://dx.doi.org/10.1126/science.aac4716) Upvotes: 2
2015/12/20
1,208
5,206
<issue_start>username_0: I'm not a US national, and I am moving to the US from a foreign country to start a new postdoc at a US university. Considering the initial expenses (especially the deposit for the rent, which is three times the monthly rent), it would help me if I could get part of my salary in advance. My question is: would it really be a strange request to ask the department about this? Has anyone done it before with a positive result?<issue_comment>username_1: My university would give you a moving allowance that could be used to cover first and last months' rent plus the security deposit on an apartment or rental house. You might start by asking what kind of moving allowance or support they offer and whether upfront rental costs like these are covered. This is not considered an advance on salary at my university, but is part of the cost of hiring. The university simply pays it to get good people to come work there. Upvotes: 3 <issue_comment>username_2: I doubt you'll be successful with this request, but it can't hurt to ask. Once you are in the U.S., grab your proof-of-employment letter and head straight to the nearest credit union. (That's a bank that serves its member-clients instead of trying to squeeze as much profit as possible from its customers.) With that proof of employment, they will give you a loan with a pretty good interest rate. Within six months you will have it paid off, and you will have started to build up a credit rating, which will stand you in very good stead if you ever want to buy a house in the U.S. If you don't have cash reserves to help you get set up when you arrive in the U.S., ask the department to ask around whether there's anyone who can put you up for a week after you arrive. That will allow you to get that bank loan and also visit the possible housing situations you are considering, in person. If nothing turns up through the department, you might give couchsurfing.com a try. Upvotes: 3 <issue_comment>username_3: Yes, this would be perceived as a strange request and will very likely be turned down; it may even be frowned upon by certain people. You have identified a strange blind spot in the U.S. academic employment system -- U.S. universities do not really have a good understanding of some of the issues facing foreign scholars. I also think this ties in to cultural factors such as the strong U.S. belief in self-reliance and personal responsibility, and a negative bias towards people of modest means. Asking one's employer for an advance is generally seen as a somewhat negative thing that only very poor or desperate people would do. Let me offer a personal anecdote that could add some context. Some years ago I had a similar experience to yours when I arrived in the U.S. as a postdoc. Like you, in my first few weeks I had to spend a nontrivial amount of money on a rent deposit and other necessary expenses, all with no financial assistance from my university. Things got worse when my first paycheck arrived and I discovered that they had entered my salary incorrectly in the system, so the amount was lower than I expected. I asked for this to be corrected and was told that it would be, and that the change would be reflected in the next paycheck, one month hence. Another upsetting thing was that I was asked by my department to pay $50 as a deposit for my office key, a key to the computer lab and a building entry card.
As it happened, I had enough savings, so all of this did not pose a great difficulty for me, but I knew this might not be true of all postdocs, and I was very annoyed by the principle of the thing: I had traveled from halfway around the world to contribute my work and talent to the university and was being treated with such thoughtlessness and lack of consideration that, instead of them paying me for my efforts, we were starting off with the money flowing in the opposite direction! When I mentioned these complaints to the professor I was working with, he immediately offered to give me a loan (which, as I said, was not necessary in my case). He then chuckled and recalled with nostalgia how, when *he* was a young postdoc moving to the U.S. from another country, he found himself short of cash for precisely the same reason. His own research mentor happily gave him a loan to tide him over until he started getting paid. So, he understood the problem very well. However, he did not appear concerned by this approach and thought it was very natural that this cycle of personal loans should continue as a solution to the problem... I should also add that part of the reason for my annoyance was a completely opposite experience I had a couple of years before, when I arrived in France to work as a postdoc on a French government grant. On the day of my arrival I was given an envelope with 500 French francs in cash, and had prearranged housing with rent deducted directly from my paycheck and no deposit necessary. Needless to say, I very much appreciated this thoughtful attitude on the part of my French hosts. Life in the U.S. has many wonderful aspects to it, as you'll soon find out, but on this particular issue the comparison to France is not especially flattering. Upvotes: 2
2015/12/20
2,538
10,941
<issue_start>username_0: This question is inspired by a comment to [another question](https://academia.stackexchange.com/questions/60393/how-to-argue-against-research-misconduct-such-as-p-hacking-and-harking) where I asked for help on how to argue against p-hacking and hypothesising after the results are known (HARKing). Someone questioned the classification of these two behaviours as misconduct, and my general experience (around my close academic circle) is that many see these two activities as part of the way we do science. Here is what I refer to by p-hacking and HARKing. *P-hacking* is when someone collects more data, changes the specification of a statistical model, changes the analysis sample, or makes other changes to the study until the results become statistically significant. Many of these things can be done with a justification, but the p-hacker (p-fisher) does them solely with the intent of obtaining a significant result. In doing so, he or she risks capitalising on statistical error (type 1 error) and publishing results that are basically a false positive. *HARKing* is when someone generates a scientific hypothesis about the data after seeing the data. It would be innocuous if the researcher acknowledged the exploratory nature of the study and sought to confirm the findings in another set of data (or if he or she used cross-validation techniques). It becomes a problem when researchers pretend that they had the hypothesis a priori and that the study was done to confirm it, hiding the exploratory nature of the study and conferring more strength on the results than they actually have. I am not asking for opinions on whether these things should or should not be considered misconduct. Rather, I would like to know the overall position of scientists in fields where statistics are used. I know of no survey on how scientists view these two behaviours, but I welcome answers that include such data.<issue_comment>username_1: The Declaration of Helsinki was updated in 2013 to "mandate" that research involving human subjects must be pre-registered. While not perfect, pre-registration prevents many of these statistical manipulations. The idea of pre-registration is that publicly stating your hypotheses and the details of how they will be tested in advance reduces questionable statistical practices. For example, changing the number of subjects, the inclusion/exclusion criteria, or the statistical model is not allowed. From my understanding, failure to comply with the Declaration of Helsinki would be considered unethical in medicine, while in other fields the pre-registration aspect is being actively ignored. For example, articles are now being published with disclaimers like "this research was conducted in accordance with the 2013 Declaration of Helsinki except the study was not pre-registered". Upvotes: 5 [selected_answer]<issue_comment>username_2: In the US, for research funded by the NIH, "research misconduct" is a finding made by the NIH Office of Research Integrity (ORI). Other federal agencies have offices with similar responsibilities for research misconduct. The ORI web site has an "RCR Casebook" of fictionalized example cases used in training researchers about responsible conduct of research and research misconduct. It also has case summaries for every case where misconduct was determined and administrative penalties are currently in force (that is, cases where someone has been barred from getting NIH funding for some period of time.)
In my reading through the training materials and case summaries, I haven't seen any cases where p-hacking was found to constitute misconduct. The cases are much more about outright fabrication of data or suppression of inconvenient data (e.g. by throwing out "outliers") to achieve a desired result. It appears that from the ORI point of view, p-hacking is not (yet) considered research misconduct. More on what it takes to reach the level of "misconduct." The NIH recognizes three kinds of research misconduct: > > Fabrication: Making up data or results and recording or reporting > them. > > > Falsification: Manipulating research materials, equipment, or > processes, or changing or omitting data or results such that the > research is not accurately represented in the research record. > > > Plagiarism: The appropriation of another person's ideas, processes, > results, or words without giving appropriate credit. > > > p-hacking wouldn't fit under "fabrication" or "plagiarism." It might count as "changing or omitting results such that the research is not accurately represented in the research record." However, the ORI also requires that: > > there be a significant departure from accepted practices of the > relevant research community; the misconduct be committed > intentionally, knowingly, or recklessly; and the allegation be proven > by a preponderance of the evidence. > > > That's a pretty high standard. I think it would be hard to make the case that p-hacking is a significant departure from accepted practice, and furthermore a researcher could claim that they didn't do the p-hacking intentionally. Upvotes: 3 <issue_comment>username_3: These types of data-straining behaviors are most certainly scientific *sins*, in the sense of being stains on one's conscience and reputation. My favorite discussion of such sins is the ["Nine Circles of Scientific Hell."](http://pps.sagepub.com/content/7/6/643.short?rss=1&ssource=mfr) Building a formal misconduct case around such data-straining would likely be very difficult, however, since it may quite easily and naturally arise from the human propensity to fool ourselves. Many people who engage in de facto p-hacking are not aware of it, particularly when there are large volumes of data and powerful analytic tools in play. A beautiful illustration of both the problem and an appropriate scientific response is [the wonderful study that used popular fMRI methods to localize cognitive functions in the brain of a dead salmon](http://www.wired.com/2009/09/fmrisalmon/). Upvotes: 3 <issue_comment>username_4: There's a nice paper showing the prevalence of questionable research practices. For example, more than 65% of researchers who participated admitted to not reporting all dependent measures: <https://www.cmu.edu/dietrich/sds/docs/loewenstein/MeasPrevalQuestTruthTelling.pdf> Unfortunately, they didn't ask about p-hacking with things like multiple analyses. I don't think running multiple specifications is a problem, however, and I think it's quite natural to do so. To give a personal example: I once recoded a three-level variable in an experimental study into a binary response, as I was primarily interested in a treatment effect on one of the responses. This was transparently disclosed in the paper.
However, a reviewer asked that I instead do the analysis with a multinomial logit, as that was more appropriate, even if it comes at the cost of making the coefficients more difficult to interpret (given the importance of communicating results, I think this is something that needs to be given some weight as well: the target audience would not be familiar with multinomial logit). So I redid the analysis as requested and dropped the logit regressions from the paper. It turned out that the results were now stronger than before, so this adjustment worked in my favor. Suppose I had realized on my own that multinomial logit was more appropriate prior to submission. Would that have been p-hacking? Consider another (fictitious) example. Suppose I have a regression with repeated observations from each participant, and my treatment effect is significant if I use random effects at the individual level, but does not reach significance if I use fixed effects (or vice versa). In many analyses, either could easily be defended -- which one is "correct"? It can't just be the one I happened to choose first. It's a bit like using AIC or BIC for model selection: sometimes one suggests the model provides a better complexity-adjusted fit, while the other suggests it doesn't. I don't think one is inherently better than the other. The solution is to ask for more robustness checks in the analysis. Instead of showing only a model with an interaction, for example, the model without an interaction could be shown next to it. (This is actually quite common in empirical economics, where the goal is to show that the result holds under many possible specifications -- and not necessarily that one has found the one true specification.) Pre-registering research makes sense in medical trials that are pretty straightforward: Group A gets Treatment X, Group B gets Treatment Y, and we compare outcomes on some dimension. If we compared the groups on 30 possible outcome measures, we'd likely see a "significant" difference in at least one of them purely by chance: with 30 independent tests at the 0.05 level, the probability of at least one false positive is 1 - 0.95^30, or roughly 0.79. That obviously cannot be sufficient to establish efficacy. But social science research is much more iterative and just doesn't work in the same way. Moreover, most recent papers report multiple studies that back up a particular claim. While the effect may still not be real, there are so many other things that are likely to be more problematic than the model specification. For example, there has recently been some work on incentives for creativity -- e.g. do people become more creative when you pay them more? Imagine the hundreds of different ways you could define and measure "creativity" and the hundreds of settings you could test this in (individuals? groups?). All of those are judgment calls, and we won't know what generalizes until there are a dozen of these experiments (ideally by many different researchers). HARKing, it seems to me, is at least in part the result of how journal articles are written. They are not meant to be chronological accounts of one's thoughts, exploratory analyses, and eventual conclusions. They have to succinctly place work in a broader literature and convey the contributions of the present work. It may well be that prior to running the experiment, two theories would have been equally plausible -- but after running the study, only one of those is reaffirmed. If so, it seems like a natural stylistic choice to set up the study using the theories that are consistent with the results, then note in the discussion that the findings go against other possible explanations.
(Including alternatives that the researcher may not have thought of, but that a reviewer connected the study to.) Upvotes: 2 <issue_comment>username_5: A counterexample is [Grounded Theory](https://en.wikipedia.org/wiki/Grounded_theory). It is a methodology for building a theory out of data, in which you constantly try to match hypotheses to the data. So hypothesising after the fact is definitely not unethical in this case! It is worth noting that this is probably one of the few exceptions to the problems that the OP and other answers have been aimed at. Upvotes: 2
2015/12/20
1,781
7,220
<issue_start>username_0: Let's say a professor has seen that the average score for the midterm exam was a grade and a half higher than it was in previous years. Because of this he gained a reasonable suspicion of cheating (he has been teaching the course the same way for several years and this is the first time he has received such high scores). He looked at the score distribution and found it to be bimodal: [![enter image description here](https://i.stack.imgur.com/wpa2g.png)](https://i.stack.imgur.com/wpa2g.png) He went on to assume that, since this distribution wasn't like the normal distribution of test scores he had received in previous years, one third of all his students cheated on the mid-term. [![enter image description here](https://i.stack.imgur.com/KJ2en.png)](https://i.stack.imgur.com/KJ2en.png) Now here comes the question: is it ethical and reasonable to assume that one third of the people in his class were cheating? Maybe the distribution was different and bimodal because some students studied harder (causing the peak at better scores), while others didn't (the peak at lower scores). Are professors allowed to make everyone retake the test because of this data (even the ones who took it honestly)? This seems to punish the supposed two thirds of the class who took it honestly. The professor goes on to claim that those students who had significantly higher scores than on their previous test must have cheated! Are professors allowed to do that? What if they got a bad score on the previous test because they didn't study, and they got a good score on the mid-term because they did study? Also, what if a student has a good score on the previous test and the mid-term because he cheated on both? Or what if the student was honest and received a good score on both tests because he is a hard-working student? So my question: **Is it normal practice and ethical for professors to accuse someone of cheating just by looking at the student's test scores?** PS: This didn't happen to me. I just heard about it from a friend.<issue_comment>username_1: Statistical tests can indicate that there may be cheating. But, to accuse students, more proof is needed. Perhaps the statistics indicated that there may be a problem and closer investigation showed that someone got access to the exam beforehand (but, for investigation reasons, the students were not told)? In short, statistics per se is not sufficient to prove anything, but it can direct attention to finding more compelling evidence (which you may not know about at this stage). That being said, I have seen enough variation of this sort in the capability of cohorts that using statistics alone as evidence for cheating is questionable practice. Upvotes: 4 <issue_comment>username_2: First, as you already noted, the better scores and the bimodal distribution may have other reasons. For example:

* One of three TAs for the course did an outstanding job.
* The professor reused questions from previous exams (perhaps without noticing) and one third of the students informed themselves about previous tests.

Most importantly, the following thing is odd: it's highly implausible that the rate of students who decide to cheat on their own increases by that amount from one year to the next. Thus, they would have to decide to cheat together or due to some common motivation. This in turn only makes sense if the cheating is a collaborative endeavour or if somebody is helping the cheaters.
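As an aside, the bimodal histogram by itself is weak evidence: a simple mixture of two honest subpopulations (say, students who studied hard and students who didn't) reproduces the professor's plot with no cheating at all. A minimal simulation sketch (Python/NumPy; all means, spreads, and proportions are illustrative assumptions, not data from the question):

```python
# Sketch: scores from two honest subpopulations form a bimodal histogram.
import numpy as np

rng = np.random.default_rng(1)
n_students = 300

studied = rng.normal(loc=85, scale=5, size=n_students // 3)      # prepared third
did_not = rng.normal(loc=60, scale=8, size=2 * n_students // 3)  # unprepared rest
scores = np.clip(np.concatenate([studied, did_not]), 0, 100)

# Crude text histogram: two clear peaks, no cheating involved.
counts, edges = np.histogram(scores, bins=20, range=(0, 100))
for count, left in zip(counts, edges):
    print(f"{left:5.1f}-{left + 5:5.1f} | {'#' * count}")
```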
Now, while such scenarios are not unthinkable, they are usually no more likely than explanations that do not involve cheating (your mileage may vary depending on the circumstances). Moreover, even if such a scenario happened, just repeating the exam is unlikely to solve it. Therefore the professor should first find out what actually happened. So, the statistics may point at something being unusual, but without further investigation you cannot say that it's cheating. Moreover, even if it is cheating, statistics can only tell you that it happened, but cannot point to any individual involved in it. To address your individual questions: * > > Are professors allowed to accuse someone of cheating based on a general increase in test scores? > > > > > --- > > > The professor goes on to claim that those students who had significantly higher scores than in their previous test must have cheated! Are professors allowed to do that? > > > Technically, yes. Almost everybody can accuse everybody else of everything, as long as we are not entering the domain of libel laws. Will it be successful or a good idea? Probably not, at least in any reasonable university or jurisdiction. And that's not even considering that he has not one but many students against him. * > > Is it ethical and reasonable to assume that one third of the people in his class were cheating? > > > Assuming something cannot be ethical or unethical; acting upon an assumption, on the other hand, may be, but that depends on the action. * > > Is it normal practice and ethical for professors to accuse someone of cheating just by looking at the student's test scores? > > > Normal practice -- I never heard of it. Ethical -- certainly not. There are several possible reasons for a good individual test result, e.g., hard work and plain luck. The professor should be aware of this, and thus the accusation is unfounded and hence unethical. Upvotes: 7 [selected_answer]<issue_comment>username_3: The OP acknowledged in the comments that the question was sparked by a specific incident, the [<NAME> case at UCF in 2010](https://www.youtube.com/watch?v=rbzJTTDO9f4). The video shows that Quinn had not just statistics, but *physical evidence and witnesses*: a student had left an answered "test bank" at his office door, and many students reported that other students were bragging about how they had gotten hold of the test bank and so had all the answers prior to the exam. Moreover, Quinn didn't just "accuse the class of cheating": he had already negotiated with the school "full immunity" for those who would admit to the deed... At least 200 students admitted the cheating after [they were offered the equivalent of full immunity](https://web.archive.org/web/20150928063524/http://www.telegraph.co.uk/news/newsvideo/weirdnewsvideo/8140456/200-students-admit-cheating-after-professors-online-rant.html). I don't know the man, and I have nothing to do with Florida, or with the US education system for that matter. But based on the information I found, the Quinn case certainly does *not* qualify as a characteristic example of the situation the OP asks about. Upvotes: 5 <issue_comment>username_4: Punishing students without any concrete proof just because they do well on a test once is neither ethical nor good teaching. Even if these students did cheat, they are being shown that it doesn't matter how they get good grades: they will be punished for getting good grades anyway.
**It will be very hard to find even a single teacher who won't get more evidence than statistics before punishing a student for cheating.** That said, statistics are what cause teachers to start looking for said evidence. Upvotes: 3
2015/12/20
1,245
5,087
<issue_start>username_0: At my university, at the end of every course, professors have to distribute an anonymous survey to the students. In this survey students can write about the good and bad aspects of both the course's program and the professor's teaching skills. Sometimes classes are very small (even under 20 students), so a problem arises: it is very easy to link a survey to a particular student, just by comparing his/her handwriting in the exam with the surveys. So my question is: **should a student mitigate his/her judgment in the survey for fear of harsher grading of his/her exam?** Note, I'm not speaking about offensive and/or "off-topic" judgments (e.g., clothing habits), but of legitimate criticism.<issue_comment>username_1: First, a couple of remarks for those not familiar with the Italian educational system, to provide more context (the OP studies in Italy according to his user profile): 1. In Italy, there are really no final exams: exams are distributed along the year in 3-4 sessions, and students can decide to take an exam much later than the end of the course. Thus, professors always receive students' comments before the exams (there's no "after"). 2. The kind of survey described by the OP was common in my university up to a few years ago (now, it's managed through a learning management system), and -- yes -- professors received handwritten forms (e.g. I read the *You must die* remark described [here](https://academia.stackexchange.com/a/49915/20058) on a handwritten form). The survey is typically handed back in class, so students can't hand it back after the exam, as suggested in one of the comments. In the edit below I explain the mechanism in more detail. > > Should a student mitigate his/her judgment in the survey for fear of harsher grading of his/her exam? > > > From my experience, I've never seen anyone waste time trying to pair the handwriting of the remarks with that of the exam papers (I surely didn't), and I'd consider retribution extremely rare (though jerks exist). Therefore, I wouldn't bother mitigating your judgment, but be polite and keep it at a professional level. **Note: How evaluations used to work in my university before the introduction of a learning management system** *(and how they still probably work in the OP's case)* Teaching evaluations are managed by a university office (not at the departmental level). In the paper era, toward the end of the courses, a professor would receive two sets of forms: one contained a questionnaire prepared by the competent office; the other could be filled with free comments. During one of the lectures, the professor would hand the forms to the students and would give a (20-30) min break to fill them in. At the end, a volunteer among the students would collect the two sets and put them in two different envelopes. The envelope with the questionnaires would be closed, signed and delivered to the office by the volunteer; the other envelope with the free, handwritten comments would be handed to the professor in class. So, the professor could read the comments before the end of the course and before any exam. Nowadays, we can still read the comments before the exams (as I said in the first remark, there's no way around this), but they are no longer handwritten. Upvotes: 4 [selected_answer]<issue_comment>username_2: I think the best strategy here depends on your knowledge of the instructor. If they are sincere, earnest, and care about students, giving them useful information is a good thing.
If they are vindictive, bitter, and dislike students, you are wiser to use other avenues to communicate this to higher-ups. This is universal. The trickier case is with instructors about whom you have little personal "intelligence"... Perhaps better safe than sorry, so treat the unknown as a worst-case scenario? This is a subtler, more philosophical question, about your own approach to risk and the greater good. In any case, even the angriest, most bitter faculty I've known would not, I think, do anything substantive about grades for a student who's said something negative (or positive). I do suspect that it could be worth a "plus" or "minus", despite their claims to the contrary, but not more than that. I do recall in my own experience having "annoyed" some instructors by non-compliance and patchy attendance... Upvotes: 3 <issue_comment>username_3: I'm not sure if every school does this, but at my school, we did our teacher evaluations before the final exam. The professors, however, were not allowed to view the comments made on the survey until *after* grades were posted. Since you say your criticism is legitimate and could help improve the course, I think you should make the comments you have about the course as you see fit. If you don't, the professor won't be able to know the criticisms and will not be able to improve the course to fix the problem(s). Many professors at my school read the comments and took them to heart as ways to improve their courses. Hopefully your professor feels this way as well. Upvotes: 2
2015/12/20
1,068
4,855
<issue_start>username_0: I am currently a traditional sophomore math major at a large university in the southern USA. I'm interested in attending graduate school once I graduate, but my university has a weak math program, and I am afraid it will severely affect me in the future. For example, the department only routinely offers the most basic undergraduate courses, such as a two-semester sequence of real analysis, one complex analysis class, basic topology, and algebra. Starting my junior year, I will be running out of math classes to take, and it will be hard to construct a schedule that makes me a full-time student. By my senior year, directed studies will be my only option for classes. Some solutions I've thought of: * Apply to math summer programs and other enrichment programs like REUs * Transfer to a different school * Get a relevant minor or double major The problem with transferring is that my upper-division math classes are not transferable to other schools in the state. I would have to retake every math class I've taken other than calculus, and it would take me at least 3 additional years at another school. The only constructive advice I've received from faculty at my school is to graduate in three years. I do not like the idea of graduating early just because I've exhausted the department's offerings. I also do not think a PhD admissions committee would look favorably upon that. What other courses of action should I look into?<issue_comment>username_1: "The problem with transferring is that my upper-division math classes are not transferable to other schools in the state." Actually, this is often at the discretion of the new university's chair of undergraduate studies. If you ask for the classes to transfer and provide evidence of what you learned, such as copies of your exams, you may be granted credit. Transferring to an institution with a history of successful placement in graduate programs would be a wise long-term move. I cannot provide advice about the financial aid you might receive inside or outside Georgia without more information. Try it and you might get sufficient aid. Graduating early is also good because it will allow you to finish school earlier and get the job you want. I think admissions committees would look favourably on early graduation because it shows discipline and organization. However, you should arrange to have evidence (such as subject GRE scores and letters of recommendation) that you have learned enough to be successful in the graduate program. Upvotes: 3 <issue_comment>username_2: It sounds like you have a good start on the most important thing: take advantage of every opportunity at your current program and excel at all of them. Even from a weak undergraduate program, if you can come out with multiple letters saying that you are the strongest student they've seen in X years, and have the credentials to back that up, you'll have a good chance at solid graduate programs. Your situation means that standardized tests are also even more important. Math is nice because there actually exists a test (the math subject GRE) that most schools put significant weight on, so make sure to take it very seriously. Doing well on the math GRE takes away a lot of the uncertainty that admissions committees will have about your application due to the institution. Assuming that you don't transfer, some other options: * Get to know faculty and spend that extra time doing research.
Proven research experience can make up for a lot of coursework, and you'll hopefully get a much stronger recommendation letter as well. This is important to do no matter what kind of institution you're at, but is probably extra important at weaker programs where your application can't rely on reputation and coursework. * Consider graduating early and getting a master's degree at a strong program. This isn't always financially feasible, but if it is, then it's an option to strengthen your coursework and resume. The availability of strong master's programs can depend on the field, but it's worth looking into. * Graduate early and get a job doing research. Again, availability may depend on the field (I hear that such jobs aren't common in math), but if you can get the right job it can be a great boon to a resume and help generate another good recommendation. Bonus points if you can find a job near a good university and take a couple of classes on the side. Upvotes: 5 [selected_answer]<issue_comment>username_3: Even if you can't transfer credit for your upper-level courses, you almost certainly wouldn't need to retake them. Instead, you could take different upper-level courses until you have enough upper-level courses to graduate. Prerequisites are typically at the discretion of the professor teaching the course, and mathematicians are usually quite flexible about them. Upvotes: 3
2015/12/20
2,861
11,974
<issue_start>username_0: I have the opposite problem to [this question](https://academia.stackexchange.com/questions/42151/how-should-a-social-scientist-deal-with-envy-of-disciplines-that-are-more-quanti). Often when I talk to someone from a 'hard' scientific field, I am 'taught' how statistics work or how programming works. It is a bit like being mansplained to, but by a hard scientist (hardsplaining, would that be a word?). Once someone was surprised that I know what the [Runge-Kutta method](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) is, and I have lost count of how many times I have been lectured about things like the normal distribution and inferential statistics. Now, I work in quantitative social science, using statistical methods that many would consider quite advanced (e.g. exponential random graph models for social networks, panel econometrics, simultaneous equations) and had the necessary mathematical training for that. I am also very familiar with several programming languages. I usually write a lot of code in **R** or Python, besides being proficient in statistical scripting languages like Stata. I also know a good deal of NetLogo and Java, which I used when I did agent-based modelling. I would write my stuff in Sweave and LaTeX if most journals didn't ask for a Word document. My understanding is that, precisely because of the observational nature of most social science data, those of us who are into quantitative methods are forced to learn very advanced techniques to deal with issues such as selection bias and unobserved heterogeneity. Furthermore, the intrinsic 'messiness' of social data means that we have to be quite good at data management, usually learning a programming language or two in order to clean our datasets. Moreover, the emergent nature of social phenomena has motivated many of us to use multi-agent simulations in our work, demanding that we learn how to program. Yet I get patronised by the person whose randomised experiment allows them to get away with a t-test. How can I react when that happens without sounding too defensive? I know that generations of armchair sociologists theorising about the social construction of this and that probably created this stereotype of the mathematically inept, computationally illiterate social scientist, but I believe that this image doesn't reflect a great share of those in my field. EDIT 1: Given the close vote, I'm adding this clarification of what I'm asking. I want to know of strategies, possibly used by others in a similar situation, to assert one's research credentials or skills in a friendly manner in a social situation where the phenomenon described happens. The type of prejudice that I describe sometimes leads my opinions or ideas to be disregarded because of judgements based on an incomplete image of my field of study. I believe that there are others in a similar situation (economists, for example) who may have strategies to cope with it. There must be a nice way to convey one's competence past the initial impression caused by the stereotype.<issue_comment>username_1: Given the example you indicate, how about: "Yeah, a t-test is probably just fine if one has good data. But if you have really bad, realistic data, you need much more sophisticated methods such as ..." Ideally, mention computational methods the techo (the "techno-macho", or hardsplainer in your language) is not likely to know or understand. If they were only seeking to show off their superiority, that'll make them go away.
And if the person is seriously interested in learning, then that's fine, too. In this case, go forth and explain. Upvotes: 3 <issue_comment>username_2: > > I get patronised by the guy whose randomised experiment allows him to > get away with a t-test. How to react when that happens without > sounding too defensive? > > > I think this is the key aspect of your question. Once the "hardsplainer" starts patronizing you, you are on the defensive. And there is really no good way out of this situation once you *react* by starting to defend your methods. * *Sometimes* the hardsplainer will realize that his assumptions were erroneous and that you may indeed know more about stats than he does. * *Or* he will in turn get defensive and start nitpicking your methods, probably getting into deeper and deeper water as he is discussing stuff he may not know much about. This is not a good conversation to have at social gatherings. I'm afraid the second possibility will happen rather frequently, simply because people are not good at revising preconceived impressions. So I'd recommend that you nip the problem in the bud, by not allowing your interlocutor to, in fact, preconceive the impression of "look, a social scientist, who probably doesn't know anything about statistics". Specifically, when you discuss your work, invest half a sentence to name-drop your analysis techniques. > > I'm looking at how foo relates to bar. Because I only have > observational data, not experiments, I use econometric panel data > models, and I find that... > > > If you hint that you use advanced models right before the hardsplainer can get the wrong impression that he can lecture you with fundamentals, he will be stopped cold. (Of course, you don't want to overdo it and come across as an arrogant know-it-all.) --- Yes, it would be nice if this were unnecessary, because people didn't have the preconceived notion that social scientists are inept in terms of statistics. Unfortunately, this notion does have a basis in fact. I do statistics for psychologists, and I see that while they do get a solid grounding in statistics, they frequently misapply models, or interpret them incorrectly, or don't understand why [p-hacking is a Bad Thing](https://academia.stackexchange.com/q/60401/4140). Then again, some hard scientists do suffer from the delusion that being an expert in some hard science means that they automatically are also experts in statistics. Upvotes: 4 [selected_answer]<issue_comment>username_3: There is an important aspect of this dilemma which I did not see stated in your question: *why is the opinion of the "hardsplainer" important to you?* Depending on the answer to this question, there are a number of different approaches that you might take. Although I am in a "hard" field myself, I face similar dilemmas in my interdisciplinary work, as I find that some researchers often dismiss or misunderstand computer science as "just data analysis," since their own experience with it has been largely limited to simple uses of Matlab, Excel, or specialized data analysis programs. I have found that it is useful to develop a spectrum of responses, depending on my degree of investment in the interaction. From least to most investment, these are approximately: 1. **Nod and smile.** When dealing with a random boor on an airplane or at a conference cocktail hour, I may simply choose not to engage. Why should I care what a fool believes when they cannot even bother to draw breath long enough for me to speak?
So I nod, smile, say something politely vague, and then go to refill my drink / take care of some work / whatever. 2. **Turn the tables.** If the person seems worth talking to and capable of listening, but is uninformed, then I'll turn it into a teaching moment: > > "Ah, it sounds like you're suffering from some common misconceptions about [subject]..." > > > It's good to spend some time with philosophy of science and the history of your field, in order to understand why things are the way they are. I have some favorite examples and anecdotes, mostly having little to do with my own work, which help illustrate the base pop-science-level points I'd want to make. Don't be defensive: instead, have fun sharing your love of the field and its complexities, and your conversation partner is more likely to enjoy themselves too and actually learn something about your field. 3. **Face the problem head-on.** For people whose opinion *really* matters to me, such as potential collaborators, program managers, and decision-making committees, I often actually have pre-prepared slides or, in some cases, actual published papers. You don't have to listen to a lecture, but instead say something like, "It sounds like you're concerned about [issue]," and then head for your careful "101-level" explanation of how that issue is approached in your work. Preparing material of this type can actually be a surprisingly valuable exercise for yourself as well: I have found that many of the assumptions of my field, while valid, have a much more complex backstory than I commonly think about while working within those assumptions, and taking time to understand them has opened up new knowledge and opportunities for me.
Actually, there are many researchers engaged in rigorous work in the social sciences currently. I am not the only researcher in my field applying sophisticated mathematical and statistical techniques to complex social science questions." Then you could rattle off some names. Not to convey meaning necessarily -- but to intimidate.) Please do not take the arrogance and lack of humility of some hard scientists personally. It's really their problem, not yours. Upvotes: 0 <issue_comment>username_5: I think this is why some people in biology started calling themselves mathematical/computational biologists. Then you also have bioinformatics people. Mathematical finance is a discipline. Etc. I'm pretty sure people changed their titles for this very reason, and I think the quickest change is to change your informal title. If you just say you're a "mathematical/computational/statistical/quantitative \_\_\_\_\_\_\_\_\_\_", it likely sparks interesting conversation in the direction of your research. And if anyone is being a patronizing idiot, throw it right back at them. They're using a t-test? "Why wouldn't you be using M-estimators, or a Bayesian model with t-likelihoods with the degrees of freedom as a hyperparameter? Your data seem like they would have outliers, so such basic statistics probably don't apply." If they don't get the point, they likely aren't worth talking to. Upvotes: 0
2015/12/21
321
1,360
<issue_start>username_0: I am a Nepali citizen and I have completed a 3-year bachelor's degree in my country. I am planning to do my master's degree in Germany. Can I start my master's directly, or do I have to take some courses first?<issue_comment>username_1: Schools in Germany typically have an "International Office" or some other similar organization whose responsibilities include the "recognition" of foreign degrees earned outside of Bologna Process countries. If they approve your bachelor's degree as functionally equivalent to theirs, you should be fine. They may also ask you to take some additional number of classes to complete the equivalency. However, if too many credits are needed, they may refuse the request. Upvotes: 4 <issue_comment>username_2: Good question! The Bachelor's degree in Germany is also three years, as far as I know, so I see no reason for a problem. The best way to proceed is to write to a professor you want to work with, and ask him. When you write, be sure to mention the project title of your "bachelor thesis". This is a six-month research project carried out by German students intending to study for a Masters. If you did any kind of big research project in your third year, for which you wrote some kind of report (anything around ten pages or more), you can quote that as equivalent to the Bachelor thesis. Good luck :-) Upvotes: 1
2015/12/21
1,059
4,737
<issue_start>username_0: I am applying from India for a Ph.D. in Mathematics at U.S. universities. The application process requires me to register people who can comment on my capability for research and recommend me for graduate school. I have heard that, since most applicants for (pure) mathematics Ph.D.s generally do not have publications to show, the recommendations are crucial in the selection process. Hence, I have three questions about selecting my recommenders wisely. 1) Do you feel that recommendations coming from a Professor carry more weight than ones coming from an Assistant Professor? That is, does the seniority of my recommender add any extra weight or credibility to the recommendation? 2) If a senior Professor gives me an 'above average' (or 'good') recommendation while an Assistant Professor (just two years on the job, not very famous, but a very active and well-cited researcher) gives me an 'excellent' recommendation -- which one do you think will support my application better? 3) Which recommendation should I prefer: one from a person with whom I worked on a project for around a month, or one from someone who has taught me two courses over two semesters?<issue_comment>username_1: Choosing recommenders doesn't exactly have clear-cut rules, so it's going to be impossible to give specific advice without detailed knowledge of your exact situation. That said, some guidelines that I found helpful: * Try to cover all your bases. I'd recommend having at least one professor with whom you worked closely on a research project or something similar, and at least one professor who knows you well from class(es) in which you excelled. Remember that you generally have three recommendations to work with, so make sure all of them are useful. * In general, you want the best letters possible, regardless of rank or prestige. As long as they're faculty and actively do (or even did) research, go for your best letters. Again, this is where you can cover your bases a little with multiple letters; hopefully not all of your potential recommenders are brand-new faculty. * Don't be afraid to simply schedule a meeting or send an email and ask them for their thoughts. In my experience, most faculty are excited to see their students aspire to graduate school and are very willing to talk about it. You should at least be able to ask your potential recommenders if they can write a *good* recommendation. * If a recommendation letter doesn't add information to your application, then it probably isn't helpful. If a professor who taught you can pretty much just state the grade you got in the class, then the letter doesn't add anything over your transcript. Lots of applicants make this mistake, assuming that an A in one or two classes is enough to get a good letter. You can get great letters from professors who only taught you, but make sure that you actually gave them something to talk about in those classes. I should also say that most people shouldn't have to worry too much about these considerations. It usually comes down to picking the best third letter writer or something. But if you find yourself trying to parse through many decisions in selecting letter writers, then you're probably overthinking it or reaching down too far. Ultimately, developing good potential recommenders is the hard part, and hopefully that has already happened. Upvotes: 3 <issue_comment>username_2: I'll give a perspective as a professor in a Physics department in the U.S. -- I would guess that Math is quite similar.
About #2: An “excellent” recommendation from the assistant professor is far better than a “good” one from a senior professor, and even more so if the assistant professor is active and well known. The more a recommendation differentiates you from the “average” student, the better, so excellent recommendations count for a lot. The status of the recommender is important, but just being senior in itself is not as big a plus as being well-known and active. About #3: The recommendation related to the project is more valuable than that of the class, unless you did something in the class (e.g. an independent project) that was really special, and that isn’t reflected in your grade for the class. Note that the admissions committee already has some insight into how you do in courses, from your grades. How you’ll do in research is something they want information about. About #1: see above — seniority per se isn’t that valuable. It’s true that more senior people often have very good reputations, and often have a broad context in which to evaluate you, but being senior *in itself* doesn’t carry a lot of weight. (I will claim that this is generally true in the U.S.) Upvotes: 1
2015/12/21
<issue_start>username_0: I am a final-year student in India and have been selected to do my final-year project abroad at a German university. The professor sent me a programming assignment three weeks ago; the assignment was to be on JSF (Java Server Faces). Instead, I misread it as JSP (Java Server Pages) and have been working on that ever since. Recently I re-read the initial email detailing the assignment and realized my blunder. I have to submit the code in JSF, but I will need an extension from my professor. I want to write a professional apology letter, but I am unable to find the right words.<issue_comment>username_1: The first key question that I think you need to consider is the reason why the professor has asked you to do this assignment. Is it for your educational benefit, or is it so that the professor can evaluate your capabilities as a programmer? Often, when dealing with students, especially foreign students, a professor will pose assignments as a way of gaining more insight into the student's *actual*, as opposed to "on paper," knowledge and skills. You are also in a situation where even if you have been working hard for the past few weeks, the professor has no way of observing that, and may gain an impression of you as a person who slacks off and makes excuses. Given these two statements, my recommendation would be to send a short email containing the following three items: 1. A short statement of your mistake, just as you gave here. 2. A short polite request for an extension, including an estimate of how many days you will need to finish the assignment correctly. 3. An attachment of all of the work that you have produced thus far (or a link to it, if it is more than ~1MB in size, to avoid possible email bounce problems) Attaching the work that you have done will allow the professor to make use of it in deciding how to interpret your mistake and request for extension. Assuming you've been doing solid work, they will be more likely to a) not evaluate you as a slacker and b) actually grant the extension. Moreover, if the specifics of the assignment were less important than the quality of the work you produce, the professor might even change the assignment to match what you have done. Upvotes: 3 <issue_comment>username_2: > > Dear Professor G, > > > After working intensely for the last three weeks on the interesting assignment you gave me, I just realized I misread JSF (Java Server Faces) in your instructions and thought it said JSP (Java Server Pages). If you would like to take a look at what I did in JSP, here is a link: [link]. > > > I apologize for my mistake. Would you like me to re-do the assignment in Java Server Faces, as I believe you originally intended? I estimate it would take me about two weeks. May I send it to you on [date]? > > > Sincerely, > > Student > > > (Don't wait to hear back before you get started. Prof. G will most likely say it's fine, go ahead. Or Prof. G might say Don't bother, I like what you did. But I can tell that you'll want to re-do it anyway, for your own satisfaction -- so go for it.) Upvotes: -1
2015/12/22
<issue_start>username_0: I am pursuing a B.S. in Math at UCLA, which is in the United States. My academic performance is above average. My major GPA is 3.73 and my GPA for upper-division courses is 3.56. My GRE General is 327. I might also pass the Japanese Language Proficiency Test Level N1 held this December, so I might be able to apply to programs taught in Japanese. I am thinking about applying to some master's programs in Japan, such as information science at the University of Tokyo. The only thing that troubles me is that almost all universities require you to write a research plan and contact the professor whose research interests you. This is quite different from the US. As someone without research experience, how should I deal with this situation? Are there any taught master's programs in Japan? Thanks in advance.<issue_comment>username_1: If you're talking about the [IT Asia Program at the University of Tokyo](http://itasia.iii.u-tokyo.ac.jp/admission.html#notice) then you don't need to contact faculty in advance: > > 21. If you already have any specific faculty members in mind under whom you would like to conduct your studies, please list them in the space below (in order of preference, if there is more than one). Although your wishes will be taken into consideration when assigning advisors, please note that they cannot always be granted. > > > However, in general the assumption is that the student will take some initiative in finding an advisor. Most of the faculty affiliated with these programs will speak (or at least read/write in) English, so you should e-mail faculty of interest ahead of time. This is true as a general rule, not just at the University of Tokyo. Upvotes: 1 <issue_comment>username_2: I'm currently in a master's program in Japan, and I also didn't have any experience with research at all when I had to write a research plan, which made me very stressed. > > As someone without research experience, how should I deal with this > situation? > > > There is no other way. **You'll have to write your own research plan and contact your potential advisor**. It is pretty hard to write a research plan when you have no clue, but here is what I did: 1. Find something you would like to research that can actually be done at the lab you intend to go to. Read about previous publications on the laboratory's homepage, and read about your potential advisor's areas of interest. *You should know how to justify why you chose that lab*, as your potential advisor will probably ask you during the email exchange. 2. Now that you have something that can be done where you want to go, you'll have to *justify why you chose that theme*. Try to write about the potential applications of [your research theme] or how it will save the world or make everybody happy. Being able to strongly justify why your research is important is also essential. 3. Describe concretely which approach you will take to tackle [your research theme]. Of course you might have no idea, but you should at least have a hint from when you researched what can be done at the lab. Write a rough schedule of about how much time your research will take (literature review, experiment designs/simulations, analysis, time for writing up your thesis, etc.). 4. Your research plan should have at least an introduction, objectives, methodology (possibly with a schedule), and references. Include references from Japanese authors if possible.
Remember that there is some flexibility, and once you are here you might find other research themes that are also interesting and/or more feasible than the one you initially planned. Contacting your potential advisor is also a very important step. Professors at prestigious universities receive a LOT of emails, and yours may easily be ignored depending on your attitude. Depending on the university's guidelines, you may have to first introduce yourself to the university's international office ("Kokusai-ka" or 国際課), which will then contact your potential advisor. You should be very humble and polite. Of course they might not expect it from an international student, but it will give you a positive image. Overdo it and you'll look desperate. In the first email, apologize for the sudden contact, introduce yourself, and say which university you're coming from and in which major. Say that you've read [potential advisor]'s articles about [research theme] and that you're interested in doing research under his/her guidance, and ask if that is possible (the lab may be at full capacity). Do NOT attach anything in the first email; it's suspicious and will look rude (like you're pushing something on them to do for you). If you are lucky, you might receive a reply within 2 weeks. If it is positive, you may be asked to send your research plan, academic transcript, and TOEFL or JLPT certificate, and possibly be asked some questions via email or Skype. At this stage you will *probably* be fine. In case your first option gets rejected, write another email from scratch for each other potential advisor (read his/her papers, etc. - recipients get suspicious when an email looks like it had just one or two names changed; it suggests that you sent the same email to many other places and conveys that you are desperate). In any case, be prepared not to receive any reply at all. I tried contacting over 15 potential advisors/laboratories and got replies from 3 of them. I got rejected from one because the lab was full, and from another because I didn't have a JLPT certificate. > > Are there any taught master's programs in Japan? > > > I'm currently in a taught master's program in a STEM field. We have many lectures, tests, appointments, seminars, etc., pretty much like undergrad, except that grading is mostly done by assignments instead of tests. Of course, it will depend on your university and department. Upvotes: 4 [selected_answer]
2015/12/22
<issue_start>username_0: Many times, I study some topic or research paper and make my own notes (it could be figuring out the math in a paper, or something else). But after, say, 7-8 months, these notes just pile up on the table, and when I look at them I don't think I will be using them for any future work. But I also feel like I'm trashing my study, and that the effort was pointless if I dispose of it. Is it okay to dispose of them? What strategy do you have for disposing of written notes?<issue_comment>username_1: You could always scan them and store them electronically if you feel that some of these notes may be beneficial in the future. I personally kept the notes that I felt would be beneficial. Classes related to my major and classes I had taken an interest in, I would keep. Notes I had that didn't seem useful long-term, I would recycle. I would use this same strategy for your notes for papers. If it could be useful for a future paper, keep it. Otherwise, dispose of it. Upvotes: 5 <issue_comment>username_2: It may help you to understand that [much of the value of those notes has been the very act of writing them](http://www.scientificamerican.com/article/a-learning-secret-don-t-take-notes-with-a-laptop/). As such, you may find that you have less compunction against throwing them out. Personally, I maintain an "aging pile" for notes, in which I keep them until they stop feeling relevant. For some things, that's a week; for others it was a box in my closet and a decade. You can also remove the physical clutter aspect by scanning and archiving in something with cloud storage: you'll be trading physical clutter for electronic clutter, but I at least find that electronic clutter is much easier to ignore. Upvotes: 7 [selected_answer]<issue_comment>username_3: Scan whatever you may need again at some point so you have an electronic copy. If possible, OCR it so you can search. This point has been made by others before, but here's what I would do going forward: try to [take as many notes as possible directly in a big txt file](https://academia.stackexchange.com/a/51807/4140), so you don't need to scan and OCR it afterwards. You can always search for relevant keywords in your txt later. And you may want to browse through Personal Productivity, specifically its [note-taking tag](https://productivity.stackexchange.com/questions/tagged/note-taking). Upvotes: 4 <issue_comment>username_4: When I make notes on a paper that I've printed, I make them *on* the paper. This means in the margins, on the backs of sheets (single-sided printing has its uses). I haven't had to, but extra sheets can always be added to the back. Then I can file the paper in some way (normally a disorganised folder on a particular topic). The equivalent if I'm working electronically (not preferred, but useful on a train) is to maintain a text file (which would probably start as .txt, but if any complicated equations are required, would contain LaTeX and would only need a few lines to be compilable). This would be stored in the same folder as the .pdf and the .bib for the paper. Even though I might be reading the paper on a tablet and making notes on my laptop, papers are stored on Dropbox, so everything is kept together. I'm naturally disorganised, a piles-of-paper sort of person, and this works for me. Sketched-out ideas have to be typed in or photographed for long-term storage or I'll lose them.
Upvotes: 3 <issue_comment>username_5: This doesn't help for the past, but in the future write your notes in a [journal](http://kit-shop.de/cosmoshop/default/pix/a/n/K3-510_Notizbuch+kariert+Normalbild.2.jpg). And put them on a shelf when they are full. For important notes, use colorful [page markers](https://media.schaefer-shop.de/is/image/schaefershop/shop120/info-haftstreifen-page-marker-img_SI_162170_A_cut.jpg) so you can find them later. If you no longer feel that a topic is important, just remove the page marker. I picked up this habit from my husband, who started doing it during his PhD. I found it very helpful during my master's, and I continue to use it at work outside of academia. edit: When you run out of shelf space, throw out the oldest journal that doesn't have any page markers. Upvotes: 2 <issue_comment>username_6: I think most people here have said what I would say too: scan them. The rationale behind scanning the notes is that they are then not lost in case you do want to get back to them. However, as others have indicated, you should consider keeping the notes that you deem important on paper, as that makes them a lot easier and more comfortable to read. The other point people did not mention: when scanning, get a document scanner. Even a small document scanner will be a lot faster than a flatbed, which would drive you insane. I have scanned many of my own notes - without my (Fujitsu) document scanner that would not have happened, as not only is it faster than a flatbed, but it can do double-sided scans too. Effectively: put in a stack of papers, press a button, and done. (Unless a problem occurs, in which case the scan is interrupted and you are informed of the problem.) I would advise you to read reviews and compare different models before making a decision. Some people have mentioned OCR: this is only practical for material that was already typeset and printed. I have not seen nor found affordable OCR software that can understand handwriting. It may exist as research code or as more expensive commercial software, but this is of little use to the average consumer; plus, you cannot be sure it will read notes correctly if they contain mathematical notation or similar (this would also apply to printed material). Upvotes: 2 <issue_comment>username_7: **Start the process of reviewing your notes and manually entering anything that you find useful electronically.** This is the only really viable way to get your handwritten notes into the computer. OCR is unlikely to help with handwriting, and scanning the notes as images just creates a "pile of papers" on your hard drive instead of on your desk. The information isn't any more accessible than it was before. I did say *start the process* of manual review, not complete the entire process. After doing this for a short time, you might find that the review is quite useful and important ideas are jumping out at you. If so, keep on going. But you might find that it is a big waste of time. In which case, just throw away the pile of papers. **Do try to go digital in the future.** Upvotes: 3 <issue_comment>username_8: I left academia some years ago.
When I left university, my notes were all neatly stored in folders and then boxed up. I **never** looked at any of those notes again. Eventually, I just decided that they were pointless clutter and dumped them in the recycling. These days, if you need to know something, it's easier to look it up (in a book or on-line) than it is to wade through a pile of scribblings you did several years earlier. Upvotes: 3 <issue_comment>username_9: There have been some general remarks about scanning things in and digitization, but here is a system that I used when I was taking paper notes. I had a manila envelope for each project that I was currently working on - I might have had 2-3 irons in the fire, so to speak. I kept the most recent material at the front of the folder. Eventually the project would wrap up (or I went and got a better job ;) At the end of the project, I would go through all the screenshots and annotations that I had made. I would have a lot of duplicates and a lot of information that was no longer relevant. This got discarded. Most of the other information I had already digitized (e.g. if I took [Sketchnotes](http://rohdesign.com/sketchnotes/) at a meeting, or added cases to our issue tracker). If I found any information that still needed to be digitized, I'd add it to a wiki, issue tracker, or email, and then I'd move the folder to a filing cabinet or drawer. It seems like this (or a similar process) would work for you. Upvotes: 0 <issue_comment>username_10: In addition to other good answers, and as I intermittently sift through hand-written notes going back 35-40 years: Some things will have been truly, clearly superseded. Recycle. Some things' disposition is less clear. One thing to do is to allocate some regular time to "reconsider/edit" such notes by typing them up (TeX, or whatever). Then you have an economical electronic (searchable!) copy. Then double-check that your typed version captures everything in the hand-written one, and recycle. Another point is that organization by date, not by purported "topic", may often work best. Or at least have things set up so that it's easy to search by date, since often I realize that temporal proximity is more relevant than what I thought at the time about causality. Upvotes: 1 <issue_comment>username_11: Recycle the lot, now, without grief. The effort of reading / indexing / typing would overwhelm you, and squash whatever creativity you might still have. The act of taking the notes might have helped, but that is long gone. If you need the facts again, consult google / wikipedia / mathworld / whatever. There is a narrow exception to this: keep unique records of (say) the medical histories of very unusual patients, or tasting notes of very rare wines, or whatever is the equivalent in your field. But if you aren't Oliver Sacks, this won't apply. In which case, the recycling calls. Upvotes: 0 <issue_comment>username_12: I have notes going back about 20 years. (Notes means daily logs [like a daytimer] and all of my coursework as a college student.) I save everything, first because it was easy and I liked having access to my info, though, like others, I thought I'd really never need it. Then my second reason arose when I needed the info itself, or needed to show when certain concepts were originally documented by me. Make it easy on yourself. Take everything related to one project/study/class and put it in a pile. Don't sort it, don't make it all face the same way, don't fuss with it! Give this pile a name.
"Trigger Study - Small Mammals" Take a ziplock bag and using a thick black permanent marker, write the name of the pile (*BIG*) just under the zipper. Put the whole pile into the ziplock bag, zip it and stand it up in any long term storage container of your choice. Walmart has clear Sterilite-brand tubs that work fine; clear helps you to see contents without manhandling it down off a shelf. I use 2 gallon freezer bags... they hold about 400 pages. You can either break down any monster sized pile into multiple bags, or order zippered bags any size you can imagine. I use U-Line, but there are others. Upvotes: -1 <issue_comment>username_13: I am used to writing notes on different supports: post-its, text files... taken at very different moments (bedside, bus). Although I have a pretty good memory (with respect to most of my colleagues), I am amazed by how "a new thought" had appareared several times before in my notes, sometimes in a slightly different form. Reading them again both refreshes and strenghtens those silent, unconscious cognitive processeses, described by many scientists, like J. Hadamard's [The Psychology of Invention in the Mathematical Field](http://www.inp.nsk.su/~silagadz/Hadamard_Essay.pdf), or <NAME>, that sometimes lead to those rare [eureka or aha! moments](https://en.wikipedia.org/wiki/Eureka_effect). I displine myself to always have something to take note (thoughts do not warn), to decorate my notes with a time and date, descriptions, mood, attitude and sights (was it rainy? what did it smell) to help me revive the moment when I read the note again, in a [synesthetic](https://en.wikipedia.org/wiki/Synesthesia) manner. And I do my best to reopen old text files, to rebrowse old post-its once in a while, randomly. I do not know if it is really effective, but I use it as a comforting ritual, for instance when I am not in mood to do real scientific work, and it does work as least like a placebo effect. At least, I know it worked with dreams: taking notes of dream details in the morning helped me remember them more accurately, and dream them again with even richer details. It helped me getting rid of recurring nightmmares, by somehow completing them progressively, like one finishes a level of a computer game. Upvotes: 1 <issue_comment>username_14: My strategy is to honestly assess the importance and use of the notes to myself and to others. ============================================================================================== ### As an undergrad, I kept my notes for some of my advanced physics, astronomy and math classes for grad school. Several of those notebooks were helpful, others were not. I kept my other notebooks for a year or two, but recycled them when I moved to grad school. After taking grad-level courses, I was able to condense and/or recycle my undergrad notes. Most of what I have left now are a couple essays or projects I'm really proud of, a few interesting "summary sheets", some thought-provoking handouts, and similar things -- maybe 5% of what I started with. (I lean slightly towards being a sentimental hoarder.) ### As a grad student, I kept the notebooks from several of my grad classes to review for my comprehensive exams and in case I'd ever need to teach a class on that topic someday. I also kept some things I was proud of from grad school and undergrad, as well as other things that were particularly interesting. I still occasionally refer to these, but it's becoming less and less often. 
### As a grad researcher, I also kept a file with most of my dissertation research: key derivations (though most of these are in \*TeX on a Git repository), meeting notes, a few key papers with my notes, a few super-useful diagrams, and some other things. ### As an early-career scientist, I have 3–4 bushel boxes in my basement with 15–25 notebooks or binders from undergrad through grad school, plus a few key reference textbooks. I may use them someday to help me teach a class or help a student I'm mentoring, but I generally don't use them. They're "archived", pending an office. I really should have a place for research notes, but I don't. What I've done in the past is kept them in one of 2-3 modest piles and then gone through them all in a year or two. At those times, I have to be ruthless about what I'm really using. ### As an instructor, I sometimes make some notes to myself for a given lesson, but rarely keep these more than a day or two. I have to be super-organized with student papers, so I reserve certain drawers for different kinds of papers, and let those build up till the end of the semester. I also archive certain student papers (e.g. exams) for a certain university-mandated time (usually 1–5 years). ### How might you apply this? * **What is the nature of the notes?** + Are the notes inherently valuable or important (e.g. contracts, stock certificates)? + I keep key derivations for my most important work + I keep some articles I've found to be meaningful to me. + I keep some helpful summary sheets (e.g. list of trig identities or `vim` commands) * **What is the likelihood that you will use the notes?** + Know yourself. Be honest with yourself. Be ruthless when you need to be ruthless. * **Is there a chance that somebody else will benefit from your notes?** + I keep tax documents, but not shopping lists.... I periodically go through some of my papers and try to reduce the number of things by something like 90%. That seems to work for me. ### Personal Organization I use index cards in my pocket, sticky notes on my desk and apps on my computer (e.g. Outlook, Apple Reminders, or Google Calendar) to remind me of meetings, to-do lists, shopping lists, and so on. I generally dispose of these within 0–3 days of not needing them, so these don't really build up. Rule of Thumb ============= **Keep papers for a number of weeks equal to the number of minutes it takes you to reproduce the document, or about a year of shelf-life per hour of work.** Obviously, adjust this rule of thumb based on things like the value of the notes, how much space you have, your reluctance to part with notes, the relevance of the notes, and so on. Upvotes: 0 <issue_comment>username_15: I'm surprised that none of these answers recommend the [Zettelkasten method](https://zettelkasten.de/posts/overview/) for knowledge management. This is an approach for creating a note repository, in which every note represents one piece of information. Notes can reference one another, and eventually a Zettelkasten (repository) will have enough content that one can use the Zettelkasten to discover new links between concepts. This can be a digital system (there are some apps designed for this specific approach to simple note-taking), or an analog system. The 'original' Zettelkasten was a system using index cards, developed by sociologist <NAME>.
Here is a [blog post](https://zettelkasten.de/posts/create-zettel-from-reading-notes/) that discusses the situation in this specific question: starting with many old, unorganized hand-written notes, and distilling information into usable digital notes for your future self. I started reading about Zettelkasten methods a few months ago, and even with *very* imperfect implementation, I've been able to increase the usability of my older notes, and have been taking more effective notes in general. Upvotes: 0 <issue_comment>username_16: The old papers could be handled as follows: 1. Scan and convert to PDF, which can then be managed with Zotero or any other reference-management software. 2. The PDF can then be annotated with notes, and these can be exported to Markdown. 3. The Markdown can be opened in a digital Zettelkasten system like Zettlr or Obsidian to retain the notes and make them reusable in the future. Upvotes: 0
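To make the Markdown/Zettelkasten workflow in the two answers above concrete, here is a minimal sketch of the kind of link index such tools maintain, plus the plain-text keyword search suggested earlier in the thread. It assumes notes are `.md` files in a single folder (called `notes` here purely for illustration) that cross-reference each other with `[[wikilink]]` syntax, as Zettlr and Obsidian do; the helper names are invented for this sketch and are not part of either tool.

```python
import re
from pathlib import Path
from collections import defaultdict

# Wikilink syntax as used by Obsidian and Zettlr: [[note title]] or
# [[note title|alias]]; the regex captures only the target title.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def build_link_index(notes_dir):
    """Map each note to the notes it links to, plus a backlink index."""
    links = {}
    backlinks = defaultdict(set)
    for path in Path(notes_dir).glob("*.md"):
        text = path.read_text(encoding="utf-8")
        targets = {m.strip() for m in WIKILINK.findall(text)}
        links[path.stem] = targets
        for target in targets:
            backlinks[target].add(path.stem)
    return links, backlinks

def search_notes(notes_dir, keyword):
    """Case-insensitive keyword search across all notes in the folder."""
    hits = []
    for path in Path(notes_dir).glob("*.md"):
        lines = path.read_text(encoding="utf-8").splitlines()
        for lineno, line in enumerate(lines, start=1):
            if keyword.lower() in line.lower():
                hits.append((path.name, lineno, line.strip()))
    return hits

if __name__ == "__main__":
    links, backlinks = build_link_index("notes")
    for note, sources in sorted(backlinks.items()):
        print(f"{note} <- {', '.join(sorted(sources))}")
```

The backlink index is what lets a Zettelkasten surface unexpected connections between old notes; the keyword search is the payoff of keeping notes in plain text rather than as scanned images.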
2015/12/22
<issue_start>username_0: Finding letter-writing guidelines that simply say something like "avoid gender bias" is rather easy. Finding letter-writing guidelines that explain what this means in more detail is extremely challenging. What do men and women tend to do differently when writing letters? What do letter writers tend to do when writing about men or women candidates? How can these biases be recognized and mitigated? [This study by Bell, Cole, and Floge](http://www.jstor.org/stable/27698612) shows that male letter writers write differently than female letter writers and that both write differently when recommending men for a position than when recommending women. But that's one study, published in 1992, and it is relatively hard to extract from this paper concrete advice on avoiding gender bias when writing recommendation letters. Is there other work in this area? Can it be distilled into something that's more directly applicable to letter writing? (Note that there is a [somewhat related question](https://academia.stackexchange.com/questions/29018/how-to-remove-gender-bias-from-an-academic-job-search) about avoiding gender bias when evaluating prospective applicants for an academic position.)<issue_comment>username_1: If you are looking to reduce gender bias in the letters of recommendation you write, perhaps you could try an experiment: pretend that several students that you know well, some male, some female, ask you for a reference. Write a letter for each one. Then black out the names and the pronouns and show the letters to a friend or colleague and see if the gender can be correctly guessed. Upvotes: 2 <issue_comment>username_2: When I write my letters, I try to consciously avoid the biases that have been reported in the literature. Namely, I try to avoid stressing the effort/emotion/affect of female candidates. For both genders, I try to focus on research capability and competence. You may find these helpful: * <http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2572075/> * <http://www.academic.umn.edu/wfc/rec%20letter%20study%202009.pdf> * The graph below can be found in this paper: <https://www.ncwit.org/sites/default/files/resources/avoidingunintendedgenderbiaslettersrecommendation.pdf> * The original source is [Exploring the Color of Glass: Letters of Recommendation for Female and Male Medical Faculty](http://diversity.berkeley.edu/sites/default/files/exploring-the-color-of-glass.pdf), doi: 10.1177/0957926503014002277, where the caption reads "Semantic realms following possessives. Rank-ordered within gender sets from equal numbers of letters: ‘her training’; ‘his research’." [![Semantic realms following possessives, rank-ordered within gender sets](https://i.stack.imgur.com/eofVj.png)](https://i.stack.imgur.com/eofVj.png) Upvotes: 4 <issue_comment>username_3: I am no expert on gender bias, but I want to make comments on this part of the question (which I still think should be split up): > > What do men and women tend to do differently when writing letters?
> > > In the study mentioned in the question (which, it is noted, has a small sample size and may be biased), the main observations in relation to this question are: * men and women were equally likely to discuss teaching, research and collegiality (all positively) * however, women were less likely to provide support for a positive assessment of research for female candidates * men wrote about intellect more often From my personal experience of reading letters for research mathematicians, I'm skeptical of the second point being valid in my field at present because all the non-teaching letters are almost entirely about research, though it's possible men write more about research than women in general. I haven't paid enough attention to the third point to have an opinion. Then the question asked is > > How can these biases be recognized and mitigated? > > > This is maybe more important for male vs female candidates than for male vs female writers. Not that it's not still interesting, but everyone writes in their own way and there is a lot of variation, both among males and also among females. A more important question, I think, is * How do I write an effective letter for a candidate? See for instance [this study](http://research.uvu.edu/albrecht-crane/4950/syllabus_files/Bruland.pdf), which cites the Bell, Cole and Floge paper and finds the biggest difference is in letters for successful candidates versus unsuccessful ones, rather than looking at gender differences. Note: there seems to be a lot more research on the other side of the question, about differences between letters *for* male versus female candidates. You can Google Scholar the study you mentioned and look at citing papers and related papers. See also username_2's answer. However, in my cursory search, I didn't notice any other research addressing male versus female writers. Upvotes: 2 <issue_comment>username_4: At <http://www.csw.arizona.edu/LORbias> you can see a nice one-page poster entitled "Avoiding Gender Bias in Reference Writing". Its suggestions generally echo points already mentioned in other answers, but it's a convenient and concise source. One common theme is to focus on the person's achievements rather than their effort, potential, or emotional or personal traits. Upvotes: 2 <issue_comment>username_5: First, here is an answer to "What do letter writers tend to do when writing about men or women candidates?" One difference I have seen mentioned in the literature but have not seen mentioned in the other answers: use of standout adjectives, words like "superb," "outstanding," and "excellent." Specifically, in letters for 62 female and 222 male applicants to a medical faculty at a large American medical school in the mid-1990s: > > We developed a list of ‘standout adjectives’, namely ‘excellent’, ‘superb’, ‘outstanding’, ‘unique’, ‘exceptional’, and ‘unparalleled’. > In tabulating the percentages of letters for female applicants and male > applicants that included these terms, we found them to be similar (63% for > women and 58% for men). And yet the letters for men read differently. Our gut reaction after reading many of these letters was that the men had been praised more highly with these terms than had the women. Thus we were led to consider frequency. That is, instead of coding mere occurrence of at least one of these terms in a letter, we coded for multiple occurrences.
Here we found that the letters for women that had at least one of these terms had an average of 1.5 terms, whereas the letters for men that included at least one had an average of 2.0 such terms. That is, there was repetition of standout adjectives within men’s letters to a greater extent. > > > <NAME>., & <NAME>. (2003). [Exploring the color of glass: Letters of recommendation for female and male medical faculty](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.185.2206&rep=rep1&type=pdf). Discourse & Society, 14(2), 191-220. And in letters written for 235 male and 42 female applicants for either a chemistry or biochemistry faculty position at a large U.S. research university: > > In line with Hypothesis 5, results revealed a significant gender difference in how many standout adjectives (e.g. outstanding, unique, > and exceptional) the recommender used to describe the candidate, F(1, 278)=3.95, p=.05. Consistent with the notion that implicit biases can influence how letter writers describe female candidates, recommenders described male candidates (M=.70) with significantly more standout > adjectives compared to female candidates (M=.60). To address the possibility that this difference could be accounted for by differences in the qualifications of male and female candidates, we conducted an ANCOVA that included number of publications, presentations, fellowships, postdoctoral positions, and number letters of recommendation as covariates. Even after removing variance in standout language due to any and all of these variables, the gender difference remained significant, p=.04. There were no differences between departments in how many standout > adjectives candidates' letters included. > > > The "standout words" from the latter study were: excellen\*, superb, outstanding, unique, exceptional, unparalleled, \*est, most, wonderful, terrific\*, fabulous, magnificent, remarkable, extraordinar\*, amazing, supreme\*, unmatched <NAME>., <NAME>., & <NAME>. (2007). [A linguistic comparison of letters of recommendation for male and female chemistry and biochemistry job applicants](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2572075/). *Sex Roles*, 57(7-8), 509-514. The latter study also found, however, that "results revealed more similarities than differences in letters written for male and female candidates." A third study did not find a significant difference in use of standout adjectives by gender of the applicant. In a review of 763 letters for applicants to an otolaryngology/head and neck surgery (OHNS) residency program at Stanford University: > > The length of the letters was similar, as were the mean number of "standout" terms. > > > However, this study did address your other question, "What do men and women tend to do differently when writing letters?": > > All 763 letters "recommended" the applicant for OHNS residency. Ninety-one percent of letters were written by men, 68.4% by male otolaryngologists (OTOs), 4.2% by female OTOs, and 33% by OHNS department chairs or division chiefs (100% men). A comparison of female and male letter writers revealed five categories with significant differences: female letter writers were more likely to call an applicant a “team player” (P = .000), “compassionate,” (P = .001) and use strings of adjectives (P = .024). In contrast, they were less likely when compared with male letter writers, to mention an applicant's personal life (P = .003), or write “letters of minimal assurance” (P = .035). 
> > > In more detail: > > Male letter writers were more likely (28% of letters compared with 10% of letters by female letter writers) to comment on the candidate's personal life, sometimes discussing the candidate's family history (e.g., immigration from another country), major accomplishments (e.g., biking across the United States), or hobbies (e.g., fly fishing). Female letter writers tended to discuss the applicant in terms of his or her professional accomplishments only. They were also more likely to mention that the applicant was a “team player” and “compassionate.” Women were more likely to use a string of adjectives to describe the applicant. Some researchers assert that the use of stringing terms is used to substitute for more substantive language about academic characteristics of the candidate. This did not seem to be the case in this group of letters where, when correlated with the USMLE scores, “stringing of adjectives” was close to positively correlated with higher scores (P = .068). Female letter writers were less likely to write a letter of “minimal assurance.” > > > <NAME>., & <NAME>. (2008). [Letters of recommendation to an otolaryngology/head and neck surgery residency program: their function and the role of gender](http://onlinelibrary.wiley.com/doi/10.1097/MLG.0b013e318175337e/abstract). *The Laryngoscope*, 118(8), 1335-1344. A more subtle difference that may occur in letters of recommendation involves "sorting" students based on gender stereotypes - for example, speaking more favorably of female students' ability at family medicine: > > To gain some insight into this, we performed detailed text analysis of approximately 300 medical student performance evaluations (MSPEs) written for students applying to a competitive diagnostic radiology residency. Results showed subtle differences in the text of MSPEs related to the gender of the author and student suggesting that gender stereotypes and their accompanying expectations and assumptions contribute to the gendered socialization of medical students toward different specialties. For example, factor analysis of word categories in MSPEs found that family medicine, a communal specialty, was positively associated with standout adjectives (eg, excellent, exceptional) only in MSPEs written about female students by female authors. By comparison, male authors rarely mentioned family medicine in writing about male students. In text from female authors writing about male students, family medicine negatively correlated with words indicating ability and insight. These results suggest that, however unintentionally, stereotype-based assumptions that women are communal and men are agentic may lead evaluators to see women as a better fit for communal specialties such as family medicine. Close examination of the text supports this as indicated by the surprise when a male student excelled in family medicine noted by this female author: “[He] really surprised us! 
[He] is an exceptional student [in family medicine].” The text from another female author appears to express relief that a male student who excelled in the communal setting of family medicine also performed well in the agentic setting of surgery: “Although [he] received highest honors on [his] family medicine rotation, surely [his] finest performance was on surgery… [where he] was outstanding — spoke with families, got consent forms signed, was extremely aggressive…” It is possible that the absence of “family medicine” in text from male authors writing about male students also results from gender alignment (ie, no mention of this communal specialty in letters from agentic authors for agentic students). > > > Quote from <NAME>., <NAME>., <NAME>., & <NAME>. (2015). [Why is John More Likely to Become Department <NAME>?](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4530686/). *Transactions of the American Clinical and Climatological Association*, 126, 197. But the study it's from is: <NAME>., <NAME>., <NAME>., & <NAME>. (2011). [Do students’ and authors’ genders affect evaluations? a linguistic analysis of Medical Student Performance Evaluations](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3321359/). *Academic Medicine*, 86(1), 59. Upvotes: 4 [selected_answer]
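For letter writers who want to check a draft against the pattern these studies describe, here is a minimal sketch of the frequency count discussed above, using the wildcard standout-word list quoted from Schmader et al. (2007). The script itself and the two sample sentences are illustrative inventions, not taken from any of the cited studies; note that the studies coded repetition rather than mere presence, which is what `count_standout_terms` counts.

```python
import re

# Wildcard word list quoted above from Schmader et al. (2007);
# '*' stands for any (possibly empty) continuation of the word.
STANDOUT_PATTERNS = [
    "excellen*", "superb", "outstanding", "unique", "exceptional",
    "unparalleled", "*est", "most", "wonderful", "terrific*", "fabulous",
    "magnificent", "remarkable", "extraordinar*", "amazing", "supreme*",
    "unmatched",
]

def pattern_to_regex(pattern):
    """Translate a wildcard word pattern into a whole-word regex."""
    return re.compile(r"\b" + pattern.replace("*", r"\w*") + r"\b",
                      re.IGNORECASE)

REGEXES = [pattern_to_regex(p) for p in STANDOUT_PATTERNS]

def count_standout_terms(letter_text):
    """Count every occurrence of a standout term, repeats included."""
    return sum(len(rx.findall(letter_text)) for rx in REGEXES)

# Hypothetical one-sentence "letters", just to show the repetition effect.
letter_a = "She is an excellent student with a wonderful attitude."
letter_b = ("He is an outstanding, truly exceptional researcher and the "
            "most remarkable student I have taught.")
print(count_standout_terms(letter_a))  # 2
print(count_standout_terms(letter_b))  # 4
```

Both sample letters contain standout terms, but only the repetition count separates them, which is exactly the distinction the studies above relied on.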
2015/12/22
<issue_start>username_0: Grants are currently given via an application process. Grants could be given via a more efficient process: a single website, much like the patent database, filled with brief research proposals, categorized just like the patent databases, and possibly restricted to one or several proposals per researcher to keep things lean. This database would be organized so that any grant-giving agency could merely select which areas of research it wishes to fund and then browse the latest ideas in that field. A large strength of this model is that researchers would not be bothered with a detailed proposal until the grant agency had already established interest in the idea, thus removing the waste associated with an obscene number of rejected proposals. There are many small details that would have to be ironed out, but why is this system not used?<issue_comment>username_1: I don't understand your concept of databases, but there are websites and systems for grant and scholarship/fellowship search! I don't know if it is allowed to put names here? I think in the USA you have similar systems. All the websites I know are for European grants, but I also know one huge database (not free) that is available all over the world. Upvotes: 0 <issue_comment>username_2: First, let us consider why there are many organizations that fund research, rather than a single research-funding organization. This is a matter of evolutionary organizational structure. In most countries, research has a non-trivial budget and applies to many different concerns of government. That means there has to be some (probably largely hierarchical) structure for organizing it. Now, let's consider two prototypical organizational structures for government-funded research. First, we might have a general research agency, which contains subdivisions addressing the research needs of various other governmental tasks: [![Organization with one research funder](https://i.stack.imgur.com/mKwQp.png)](https://i.stack.imgur.com/mKwQp.png) Alternatively, each government department might have its own research agency: [![Organization with many research funders](https://i.stack.imgur.com/bUZAv.png)](https://i.stack.imgur.com/bUZAv.png) Almost everywhere, we see organizations more like the second structure than the first---there might well be some countries in the world where research is so small or so controlled that it is organized in the first way, but if so, I am not aware of them. Why might that be? Consider what happens if you are a leader in the department of agriculture, and you want to expand your agency's research work. Unless strong regulation prevents you from doing so, it's much easier to create or expand a research organization within the agriculture department than it is to get an independent research department to do it for you. A research sub-department within agriculture is also more likely to serve the peculiar needs, time scale, market structure, etc. as relates to agriculture. It's also easier and more rewarding to go to government leadership and fight to get resources for your own organization, where you can explain exactly how you plan to utilize them, than to fight to give them to somebody else. Since both government structure and research needs evolve over time, we may thus expect research organizations to multiply, both across the government as a whole and also within individual sub-organizations.
They *are* in fact occasionally reorganized and combined with the goal of making them simpler and more efficient to interact with, just as other government agencies are, but that will typically not reduce the number down to one, just to a smaller "many." Moreover, we've only discussed government funding, not industry funding or funding by foundations and NGOs, which all have their own separate needs and desires and further complicate the funding landscape. Now, to the second aspect of the question: why is there no central database for applications? Sometimes there are, at least partially. For example, in the United States all government requests for proposals go through [FedBizOpps](https://www.fbo.gov/). Most research solicitations can thus be found there (though not all, due to the diversity of mechanisms), along with requests for things like [security guards for the US Embassy in Costa Rica](https://www.fbo.gov/?s=opportunity&mode=form&id=f75babe28fc1ee1b5cc92a5d364dafef&tab=core&_cview=1). As you might guess, however, the sheer breadth means this often isn't a terribly efficient method of searching. Likewise, every agency has different sorts of information it's looking for in research proposals. Again, taking the US as an example, the NSF really wants to know how its funds will support graduate student and postdoc education, since that's a key part of its mandate. AFRL, on the other hand, usually doesn't care much about supporting students, and has a mandate instead focusing on how its funds will affect current military concerns. As a result, a "universal" proposal would likely be quite cumbersome even if the bureaucracies were somehow reconciled. Bottom line: "research" is too complex and pervasive a set of needs to readily stay contained within a single unified organization. Upvotes: 4 [selected_answer]
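Purely to make the asker's hypothetical registry concrete, here is a minimal sketch of the browse-by-category query it would support. The data model and field names are invented for illustration; no existing funding system works this way, which is the point of the answer above.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    researcher: str
    field: str      # a category, as in a patent classification scheme
    summary: str    # the brief pitch an agency would browse

# A toy in-memory registry; a real one would be a searchable database.
REGISTRY = [
    Proposal("A. Researcher", "materials science",
             "Self-healing polymers for aircraft skins."),
    Proposal("B. Researcher", "agriculture",
             "Drought-tolerant wheat via marker-assisted breeding."),
    Proposal("C. Researcher", "agriculture",
             "Nationwide soil-microbiome mapping."),
]

def browse(registry, funded_fields):
    """What an agency would do: pick its fields, then browse the pitches."""
    return [p for p in registry if p.field in funded_fields]

for proposal in browse(REGISTRY, {"agriculture"}):
    print(proposal.researcher, "-", proposal.summary)
```

Even in this toy form, the answer's objection is visible: each agency would still need its own extra fields (student support for the NSF, military relevance for AFRL), so a shared schema would keep growing and stay cumbersome.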
2015/12/22
<issue_start>username_0: In an academic paper, when I refer to an article by the author's name and publication year, and then use a relative pronoun, should I use 'who' or 'which'? For example, > > We referred to the method proposed by Smith and Johnson (2004), who suggested blah-blah. > > ><issue_comment>username_1: In your example, you'll need *who*, as you wrote: > > We referred to the method proposed by Smith and Johnson (2004), who suggested blah-blah. > > > Here's an example with *which*: > > We compared our results with those of the method proposed by Smith and Johnson (2004), which is similar to our method except in how xxx is handled. > > > Upvotes: 0 <issue_comment>username_2: This depends entirely on the subject of your subordinate (relative) clause. If the authors are the subjects of the subordinate clause, use *who*. E.g. > > Many postmodernists reacted angrily to the stunt pulled by Sokal (1996), *who* they felt had acted unethically in an effort to discredit their field. > > > If the paper or text is the subject of the subordinate clause, use *which*. E.g. > > If you want to improve your mathematical writing, I recommend carefully examining Halmos (1974), *which* as a text manages the difficult feat of combining scrupulous rigor with broad accessibility. > > > Upvotes: 4 [selected_answer]
2015/12/22
<issue_start>username_0: I am a graduate student at university A, and I'm going to do an internship in a lab at university B. I used an internship contract from my originating university A, which stipulates that university B can make a contract on a case-by-case basis to later exploit the work I produce during my internship. However, university B asks me to change this clause to **transfer all intellectual property rights to university B exclusively**, meaning that I won't be able to use my work afterwards without approval from university B. Note that I am not paid by university B, nor do I receive any other compensation from university B for the whole duration of my internship there, and I'm not hosted on-campus either, but I will get academic credit from university A. Is this a common practice for lab internships? Can I do something about it? (I don't mind giving them all the rights to exploit my work, but I would also like to be able to use it myself, particularly the software I will develop.) /Final update: Thank you all for your great feedback. The issue was resolved quite simply by agreeing with my supervisor to put the developed software under an open-source license. The contract thus won't have any impact whatsoever.<issue_comment>username_1: If this is common practice, it should not be. It is exploitative. If they will not negotiate down to something reasonable -- and they might; that contract has the air of being written by lawyers who did not explain it clearly to the lab -- do not take the internship. You can do better. Upvotes: 1 <issue_comment>username_2: It is common for universities (at least in Europe) to have you sign off the rights to your work **when you are being paid by them**. At least I had to do that as a PhD student and a postdoc. In your case, you will be working in a high-end lab and they'll be training you, probably **giving you access to existing code and data**, and not charging you for using their equipment. If there is already a large amount of code at the lab and you develop an extension to it (something that frequently happens in my lab), should you be allowed to use the entire platform later or not? I would say that at this point of your academic career, **you don't really have a lot of leverage**, except refusing to sign and finding another lab. My advice is to sign the contract and go to the lab. Learn everything you can and do your best job. Once you leave the lab, ask the professor if you can keep using the code for your future projects, while citing his lab or some paper that you'll jointly publish. **Most people will say yes.** If they refuse to let you use your code, that doesn't stop you from using the ideas behind it, as long as they have been published (article, thesis, etc.). In the past, when dealing with unreasonable people, I rewrote a mathematical library that I had developed to begin with, based only on the published papers. I even switched programming languages and made many improvements the second time around. Of course, it was a wasted week... Upvotes: 2 <issue_comment>username_3: > > Can I do something about it?
> > > There aren't any magical answers here, but aside from the obvious options of either signing the contract and accepting that this is a price you'll have to pay for a really good professional development opportunity, or refusing to sign the contract under any conditions and seeking an alternative internship at a more accommodating institution, I see a third option: **negotiate.** There are many possible ways to go about this, but here's one: I assume you were accepted to the internship by a PI/researcher who thought that you had good qualifications and that his/her lab would benefit from your work and talent. This person could be your ally. What I would do is write them a polite email along the following lines: > > Dear Professor Smith, > > > Thank you for offering me the internship at your fusion reactor lab. I am excited about this wonderful opportunity and am looking forward to starting in a couple of months, and have even begun doing some background reading on flux capacitor technology to make sure I can be as productive and helpful as possible from day one. I am writing, however, to express a concern about an issue that came up and that may prevent me from taking on the internship. I was informed by your Office of Research that your university is refusing to accept the standard internship contract my own university advised me to use (see the email I received from them, appended below) and is asking me to agree to a change in the standard intellectual property clause, which in its present form is designed to protect the interests of myself, yourself and both our respective universities, to an alternative version that transfers all IP rights to your institution. I am afraid I don't see this as an acceptable or balanced arrangement from either a moral or practical point of view. I am happy to assign any rights that would allow your lab to exploit and make use of any work I do while at your lab, as in my proposed contract, but since among other activities I will be developing code that I may want to use in the future for my thesis research or other legitimate purposes, I do not think it is reasonable to accept the terms offered by your university. > > > For this reason, I am asking for your help in resolving this situation. If we cannot reach an agreement, sadly I may be forced to withdraw my acceptance of your internship. > > > Best wishes, > > > [your name] > > > The thing to remember is that while you are negotiating with a counterparty that's massively more powerful than you (which is why they feel they can get away with making such demands), you do have a bit of leverage: *they want something from you*. Whether you will succeed depends on the personalities of the people involved, how badly the PI wants you, how many other talented students are lining up to take your place, how unique the skills you have to offer are, how rigid the university's bureaucratic machinery is, etc. If the PI is unwilling or unable to help, you will find yourself back where you started, so you'll still have the option of either accepting the terms you're offered or giving up the internship. Upvotes: 4 [selected_answer]
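As a footnote to the asker's final update, here is one lightweight way that "putting the developed software under an open-source license" often looks in practice: an SPDX license header at the top of each source file, plus a LICENSE file in the repository. The sketch below uses Python and the MIT license purely as an example; the names are placeholders, and this of course only works when the supervisor (and whoever else holds the rights) agrees to the release, as happened here.

```python
# SPDX-License-Identifier: MIT
# Copyright (c) <year> <intern's name> and <host lab>
#
# Released under the MIT License; see the LICENSE file in the
# repository root for the full text. Because the license lets anyone,
# including the original author, use, copy, modify, and distribute
# the code, the intern keeps practical access to their own work no
# matter which institution ends up holding the copyright.

def placeholder_analysis():
    """Stand-in for the intern's actual research code."""
    raise NotImplementedError
```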
2015/12/23
<issue_start>username_0: I am about to graduate with a B.A. in Math and want to pursue a Ph.D. in the subject. I understand that most graduate schools require both the general GRE as well as the math subject test. I am wondering exactly how important these exams are for admissions. Obviously it varies by school, so I was hoping people could compare them to undergraduate admissions with the ACT or SAT. For example, as high school students we were told that SAT / ACT scores were an important but not determining factor in admissions. Accordingly, I stressed quite a bit about my scores and even retook the ACT. For a graduate program, especially in mathematics, is the general GRE really that important? Or is it more of a check mark -- that is, used to make sure candidates are able to read and write, but not necessarily to pick those with the highest scores? Similarly, are the subject tests decisive at all? Or are they used just to make sure that you know the "basics," while the course load and letters of recommendation are far more important?<issue_comment>username_1: The SAT/ACT are there really to quantify whether your abilities in the subjects tested are on par with what you'll need to succeed generally in college. It also gives the college a means to compare individuals at a national level. The GRE itself is technically supposed to measure how your undergraduate studies have prepared you academically for graduate school, but institutions vary *widely* in their consideration of GRE scores in their admissions process. It could be just a check mark, but it could also be integral to the selection process. Many graduate programs at my institution require nothing but the general GRE score to exceed 1000 in total, and selections are influenced much more by letters of recommendation, GPA, statement of purpose and prior research experience. Of course, a particularly low GRE score could have a negative effect on your application, much like a low SAT/ACT can impact college admissions. If they want your performance on the GRE subject test in math for a math Ph.D., it's reasonable to expect that they'll scrutinize that score more than the general exam. But it's really up to the graduate admissions staff who oversee the application. Upvotes: 2 [selected_answer]<issue_comment>username_2: Certain mathematics programs will heavily weight the GRE subject test. Many top programs will have a threshold that they expect students to pass for them to actually read the application. Usually, one should view the subject test more as something that can help you be considered, not something that can get you accepted to a program. This being said, it is definitely something that should not be considered just a formality when applying to top-25 universities. Upvotes: 3
2015/12/23
1,761
7,795
<issue_start>username_0: My friend has an early draft of a math result he would like to publish. In telling me about his results, I made an observation that significantly simplified a portion of the proof. He wanted to list me on the final publication (not as a co-author, but more of a shout-out for contributing), which I was happy with. I don't plan on going into academia so publications aren't essential to my career. However, as I've thought more about the results, I've seen more and more places where the proof could be simplified using alternate methods, to the point where the proof can be reduced to a fraction of its original size and bears little resemblance to the original. My friend doesn't know about the new results yet and I worry it would seem like all the work he had done before was unnecessary. But his work was certainly influential, as it was easier to make the observations given that I knew what the result should be. I certainly wouldn't want to withhold my findings, so I suppose my question boils down to this: how should I handle telling him?<issue_comment>username_1: Tell him what you have figured out. Maybe he will want to modify his own proof and add you as an author. Maybe he will just add a note that it has been pointed out to him that the result can also be proven using other specified methods. Maybe he will look over your shorter proof with interest but not change his manuscript at all. Understanding how a result can be proved is obviously of key importance in mathematics, but do not forget that the result he has proved ought to have some interest in itself. It can be more important to know that something is true than to have the most elegant proof of its truth, and realizing that a result is provable and interesting is a significant accomplishment on its own, even if a "better" proof can be found later. Upvotes: 3 <issue_comment>username_2: **TLDR:** Well, looks like I wrote a pretty long answer! I don't really see how I can summarize it in a single sentence, sorry... You'll just have to read the whole thing. :-) --- You are worried about being the bearer of bad news and upsetting your friend, which is very noble, but it's not clear to me that that's in fact the right way to view the situation. From your description it sounds like both you and your friend made notable and unique contributions: he discovered the result and was the first to prove it, and you found a much simpler proof (something that's often not hard to do when you know what you're trying to prove but aren't encumbered by the baggage of all the dead ends the original discoverer of the result had to travel to get to his complicated proof). Your achievement does not necessarily devalue his, and might actually *increase* its value, in which case he should be *happy* to hear the news. Let's examine the possibilities for how your discovery can affect your friend: 1. Maybe your new proof is so simple that it completely trivializes the result, making it unpublishable or at best a lemma that can only be published as part of a larger paper with additional results. Well, in that case your friend would be right to be disappointed that the result he thought he so cleverly proved turned out to be trivial. This is the only possibility I can think of where your discovery would really be bad news. 2. 
Alternatively, your new proof might be such that it makes the result seem even cooler, since while the result was and remains publishable either way, the original proof was clumsy and inelegant, which would have made for a long and uninteresting paper that not many people would enjoy reading, whereas the new proof opens up a new perspective or adds a new kind of argument that makes the result itself seem more interesting and would make for a much more interesting and elegant paper. In that case your friend should be happy, except that: 3. Your friend might worry that your discovery would entitle you to be a coauthor of the paper. Perhaps he was looking forward to publishing his own solely authored paper, where he would completely "own" everything. Well, it makes sense that the idea of having to add you as a coauthor might come as a bit of a shock to him and take some getting used to. However, if he were more experienced he would realize that having a coauthor has many advantages; **the value of having written a paper with a coauthor is not half of the value of a solely authored paper** -- it is more than that, since the combined value of both coauthors' contributions often makes for a paper that's worth much more than the sum of its parts. This could very well be the case here, as I noted above. Taking this into account, in this scenario your friend should still be happy, although it might not be obvious to him that that's the case, and it's an interesting question how to make him see that. If he were to consult with his advisor, I'm sure that would help. 4. Finally, there is the delicate matter of your friend's ego. It may be that while from a professional point of view the news you would deliver him about the simplification of his proof is good, it would still hurt his ego and make him feel like he is stupid or inadequate for not having found the simplified proof himself. This is another example where having some experience can help soften the blow. The truth is, in math each of us has some unique skills and abilities, and one person's skills and abilities often complement those of another person such that the second person is able to see things that the first person didn't, even if they've been thinking about the problem for much longer. This works both ways, and many of us have had experiences where others have found simplifications of our proofs or points of view, and conversely we have done the same to others. So, over time we learn that being outdone by someone doesn't mean we're stupid and the other person is a genius, or even more clever than we are. Another way of saying this is that comparing mathematicians by mathematical ability isn't a linear order relation: you cannot order all mathematicians in a line such that if A stands to the right of B then A is a better mathematician than B. I hope your friend will be able to see that. (It may be a bad idea for you to try to explain it to him yourself though; that might come across as condescending.) Now, to conclude let me talk about the authorship of the paper, since I keep referring to you as a coauthor. The reason is that it seems clear to me that with the simplification you found you have earned authorship of the paper. Again, it's very noble of you to dismiss this as unnecessary, but you won't actually make your friend feel better by giving up authorship, since if you do that he would only feel like a fraud who writes a paper with someone else's proof. 
A joint paper would give the credit where it belongs: with both of you, each having made an important contribution without which the paper could not exist in the form it does -- this is something that it's crucial that your friend should understand if he is to feel good about the whole story. Second, although it's possible that you are an ego-less, yoda-like person who doesn't have trivial human desires like wanting to get credit for some cool math you did, I would dare speculate that you not only deserve such credit but will actually feel good about being made a coauthor of the paper. The fact that you don't need it for your professional success is both irrelevant, and, in my opinion, false (even if you don't work in academia, people are *very impressed* by things like authorship of math papers; trust me). So my suggestion is, let's save this kind of modesty for those 95% of situations in life where it is warranted and helpful. Upvotes: 4 [selected_answer]
2015/12/23
926
4,157
<issue_start>username_0: I found three PhD programs at one school. The programs and areas of research are close to each other and also close to my interests, and I am interested in studying in any of the programs. I contacted some faculty members and they encouraged me to apply. These programs share faculty members. In addition, applying for more than one program does not incur additional fees. I applied to programs A and B, and I am wondering whether or not to apply to program C. Does applying for three programs have a negative effect on my admission chances? Applying for three programs means writing three SOPs to be read by a small group of professors.<issue_comment>username_1: When the programs you are planning to apply to have overlapping faculty, it's not necessarily helpful to apply to multiple programs simultaneously. This suggests that you either don't really know what you're interested in or are trying to maximize your chances by applying to multiple programs and hoping that the admissions committees don't notice. However, if different faculty encouraged you to apply to each of the three programs, that's something else altogether. Then you might consider mentioning which faculty encouraged you to apply to specific programs in your statements of purpose for each program. But otherwise, I'd apply to your preferred program. Upvotes: 1 <issue_comment>username_2: There are several reasons why you might WANT to apply for all three programs: * You have not already contacted faculty to be your mentor. * The GPA, GRE, etc. requirements are radically different. * Faculty overlap is minimal. If you are interested in a particular faculty member (and the requirements of the programs are the same), I would contact that faculty member and start the conversation. I would ask that faculty member if they suggest one program over another (they may know more about the funding situation, number of applicants this year, etc). Upvotes: 0 <issue_comment>username_3: If the different programs are in the same department, or the institution is small, applying to many programs will not reflect well on your prospects to be a successful PhD student. Doctoral study often involves intense, sustained focus on a single topic; applying to three programs may be seen as an indication that you lack that focus! It depends on how graduate applications are handled at the institution you are applying to. Does the graduate school/graduate Dean look at applicants before the programs do? Does a department with multiple PhD programs look at applicants at the department or program level? If grad applicants are reviewed at the level of the College then surely it will be noticed that you're taking a "shotgun" approach to applications, which may be interpreted negatively. It all depends on how many, and how strong, the firewalls between programs are. At a big institution you will probably be OK. At a small one, no way! For what it is worth, at my institution (a moderate-sized, 14,000-student, second-tier state institution with doctoral programs) we would probably notice a student applying to 3 doctoral programs. Upvotes: 0 <issue_comment>username_4: This answer is coming very late, but may be useful to people who land on this thread. If all three departments have faculty and research which are aligned with your research goals, there is no harm in applying to all 3 programs. 
The graduate committee is made up of professors who are very well aware of the fact that almost all PhD applicants apply to multiple PhD programmes; the difference here is that they know where 2 other applications of yours went. They may ask you to set a priority, that's all. Showing interest is different from receiving an offer: professors may be interested in multiple candidates, but due to constraints give out offers to a smaller pool. They should not hold anything against you. If they do, you are better off not joining that programme. I applied to 3 PhD programmes at the same university, and got selected into my second choice. All 3 departments knew that I had applied to the other 2 as well. They had no issues with it. Upvotes: 2
2015/12/23
1,099
4,602
<issue_start>username_0: It is holiday season, so I thought of reconsidering my approach to holiday planning. As a student, holidays in academia are somewhat shaped by the semester breaks and the lecture-free period. When working in academia, things are slightly different. One still has to be available during lecture times, and also account for deadlines and additional tasks. Each time I want to plan some holiday, I remember my yearly tasks, and then I see that I don't have time to relax. Work seems to never end: I have to stay in the office and prepare the exercises, work on the project, write papers, etc. There is clearly something wrong with this, and I want to change it. But I don't know how to distribute my holidays throughout the year, because I've been used to working all the time. How often should I take holidays? I know this question is somewhere between workplace and academia. And I know it also may provoke many opinion-based answers. But I would like to know the different approaches from more experienced people.<issue_comment>username_1: I tend to combine my holidays with academic travel. In part, because it reduces the cost of going to conferences: I don't have to fly halfway across the country just for two days. In part, it also reduces the monetary cost of holiday travel. For example, adding 2 days for sightseeing/relaxing after a conference doesn't impact flight reimbursement. You pay for the hotel/airbnb, but that's it. That is especially appealing if you are going to a conference in Europe [US] and you're based in the US [Europe], such that the flight is a substantial part of the cost. I've also gotten flights to other cities reimbursed. As long as the flight X -> Y -> Z is not more expensive than X -> Y -> X, neither my university nor others that have paid for my travel have had any objections. It almost always costs them less (and never more), so it's a win-win situation. In that case, I still end up paying Z -> X, but that's half the fare I'd otherwise pay. If the flight to Z costs more than to X, I let them know and claim less than the actual cost for reimbursement. Another advantage of this is that it spreads out vacation days throughout the year. I never feel guilty taking off a day here and there and I'm substantially more productive afterward. Doing nothing for two weeks doesn't particularly appeal to me and would likely just stress me out once I got back. Upvotes: 5 [selected_answer]<issue_comment>username_2: I try to take all of my holiday allowance, usually spread through the year in a few blocks of 1-2 weeks (usually clustered around June-September), and then odd days here or there. To expand a little, I have noticed that productivity is not directly proportional to hours spent in the office. In addition to the obvious benefits to family life, taking holiday means I am *much* more productive when I return to work and I am happy to rigorously defend my right to take holiday if it is ever questioned (which it hasn't been, to date). I book the holiday well in advance and make sure it doesn't clash with any prior commitments, then defend my calendar. When I'm on leave I remove my work email from my phone and set an out of office reply. Upvotes: 3 <issue_comment>username_3: Holidays have two uses for those of us who are based outside of our home countries. The first is to get to know the country where we are based and its surrounding countries better. I usually take some days off when I can combine long weekends with them so that I can travel around. 
These short trips usually don't disrupt my work routine and provide much-needed time away from my office so that I can relax and be more productive when I'm back. My field is social science and I could technically carry my work around, but I deliberately avoid doing so. Even so, many eureka moments happen when I'm out hiking during a long weekend. I try to have these long weekends at least once every two months. A second use for holidays is to visit home and see parents and relatives. These are very disruptive, as they involve intercontinental flights and being away from my work computer for a couple of weeks. I tend to time these trips with Christmas and New Year, which are times of the year when nothing works in most Western universities anyway. These trips happen only once a year, and I try to use them to set a self-imposed deadline for my research activities. Having to reach some key milestones in the project before leaving for these long holidays forces me to keep up a nice rhythm in the months leading up to them. Upvotes: 0
2015/12/23
368
1,529
<issue_start>username_0: I'm preparing my resume for internship applications. I have already included courses that I put extra emphasis on (reading advanced material beyond the scope of the course). I read lots of books. Should I include topics that I self-studied in my resume as extracurricular knowledge?<issue_comment>username_1: I am sure that there is a very wide range of opinion/reaction to what some people call "self-study". For me (grad admissions in mathematics in the U.S.), evidence of interest in reading things beyond what coursework demands is an extremely positive and unusual indicator. That is, so far as people seem to let on, no one reads any math book that's not a textbook for a course they've registered for, no one reads ahead, and so on. I don't understand it, although I've seen it for decades. Yes, it is not as easy to "document" reading things as it is to document "grades in courses"... but my own opinion is that even excellent grades in courses form only a limited positive, since they reflect compliance, obedience, etc., even if at a challenging level/rate/whatever. That is, for me, mathematics is not merely a school subject, nor merely a thing that one might get research funding for... :) Upvotes: 4 [selected_answer]<issue_comment>username_2: Although certified studies would be given a higher weight, it should be fine to add your self-taught knowledge to a résumé as long as you are able to back it up with related work or in conversation during an interview. Upvotes: 1
2015/12/23
871
3,691
<issue_start>username_0: **Some intro:** I am a student of BTech Aeronautical Engineering from India. And I am going to publish something in a scientific journal. **Dilemma:** My research is about CFD (Computational Fluid Dynamics) and has been carried out mainly by simulation, as building a physical model is infeasible for me for financial reasons. So, is it possible to publish the simulation results only, with the CFD analysis data backing up my thesis?<issue_comment>username_1: Possibly Yes ------------ If you are using an [already validated CFD code](http://www.grc.nasa.gov/WWW/wind/valid/tutorial/overview.html), then any unique or interesting results that you find would be candidate material for an article. If you are writing your own CFD code, then you must validate/verify it before you publish your code's results. [In code validation,] > > The overall objective is to demonstrate the accuracy of CFD codes so > that they may be used with confidence for aerodynamic simulation and > that the results be considered credible for decision making in design. > > ... > > Credibility is obtained by > demonstrating acceptable levels of uncertainty and error. A discussion > of the uncertainties and errors in CFD simulations is provided on the > page entitled Uncertainty and Error in CFD Simulations. The levels of > uncertainties and errors are determined through verification > assessment and validation assessment. > > > There are several methods of validation but the two widely accepted methods are: 1. Simulate a configuration that you can test, then test that configuration. If your code's results match the test results, that's generally considered a good validation. 2. Simulate a configuration that you can model with an already validated CFD code. If their results agree, that's also considered a good validation. Of course you still need to perform a grid sizing analysis and other tests to ensure/prove you're using your code correctly (a minimal sketch of such a grid convergence study appears after the answers below). Upvotes: 4 [selected_answer]<issue_comment>username_2: In principle yes! There are numerous publications that merely use commercial CFD packages. What makes your commercial CFD solver simulations publishable? * **Validation** of your results (completely or partially) by some other scholar's experimental results. * A hot topic that **has not been touched by the CFD community before**. A colleague of mine just published his pure commercial CFD results in a very prestigious journal. * Your **adviser's reputation** and network. Some professors are well known in their own field. Naturally, any publication coming with their name on it bears a message: most probably the results make sense and contribute to the field. * Never underestimate the power of a good CFD simulation. So many scientists are not aware of what is going on in industry. There are **R&D people in industry** who are looking for similar CFD simulations and respect their practical value. Upvotes: 1 <issue_comment>username_3: Certainly, simulation results in CFD can be published. When evaluating a paper showing model results, I consider: * What does it tell us that is new? Is it giving new insight into the system being modelled, and is this insight generalisable to a category of similar systems? Is it giving us new insight into how a process works? Or large-scale emergent patterns from detailed processes? Is it telling us more about the accuracy and limitations of the model? Or is it presenting a new approach to modelling or model evaluation? 
* Will the results be useful to other people interested in other systems? * How effectively has the model been evaluated and assessed? Upvotes: 0
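The "grid sizing analysis" mentioned in the accepted answer is usually reported as a grid convergence study. The following is a minimal sketch, in Python, of how such a study can be summarized using Richardson extrapolation and the Grid Convergence Index (GCI); the drag-coefficient values and the refinement ratio below are hypothetical placeholders, not results from any actual simulation.

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of accuracy from solutions on three systematically
    refined grids (f1 = finest, f3 = coarsest), constant refinement ratio r."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def gci_fine(f1, f2, r, p, fs=1.25):
    """Grid Convergence Index for the fine grid, in percent: an error band
    on the fine-grid solution. fs = 1.25 is the safety factor commonly
    recommended when three grids are available."""
    e21 = abs((f2 - f1) / f1)  # relative change between the two finest grids
    return 100.0 * fs * e21 / (r**p - 1.0)

# Hypothetical drag-coefficient values from coarse, medium and fine meshes:
f3, f2, f1 = 0.02410, 0.02320, 0.02284
r = 2.0  # each mesh refined by a factor of 2 in each direction

p = observed_order(f1, f2, f3, r)
print(f"observed order of accuracy p = {p:.2f}")
print(f"GCI on the fine grid = {gci_fine(f1, f2, r, p):.2f}%")
```

A small reported GCI (on the order of a few percent or less) is commonly taken as evidence that the solution is adequately grid-independent, which is something reviewers of simulation-only papers often look for.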
2015/12/24
3,343
14,575
<issue_start>username_0: I am reviewing a paper for possible publication in a respected journal. The English in the paper is very poor. The authors are clearly not native English speakers. I want to write something to the effect of the following in my review. > > I advise the authors to find a native English speaker to proofread the manuscript. > My question: **Is this appropriate in a review?** On one hand, I think it is good, constructive advice. The paper would be significantly clearer if someone spent a few hours helping them fix it up. I can try to help them through reviewer comments, but it would be much easier if someone could help them in person. The authors are located in a western English-speaking country, so they should be able to find someone. On the other hand, I don't want to be "the mean reviewer." I understand that English can be difficult to master for immigrants. Perhaps there is a more diplomatic way of saying this.<issue_comment>username_1: It's obviously appropriate and constructive to comment on writing issues that significantly affect the quality of the paper. The only question is how to do it as constructively as possible. Personally, I prefer to suggest that "the authors get editing help from someone with [full professional proficiency](https://english.stackexchange.com/a/105762) in English" rather than asking for "a native English speaker." I see other reviewers write the latter, so it's not uncommon, but I feel like it sounds a little bit like I'm "punishing" the authors for not having been born in an English-speaking country. There are plenty of academics who are not native English speakers, and don't have the same proficiency as native English speakers, but are still perfectly capable of high-quality academic writing. Upvotes: 7 <issue_comment>username_2: I think this sort of review comment is pretty common, but I dislike it. The purpose of the review is to advise the editor about the quality of the manuscript. Advising the authors is beyond the scope of the review. For most journals, corrections to spelling/grammar are (in theory) provided by a copy editor. For expensive subscription journals, authors should be allowed to use this service. There are also three ways this sort of comment can make the reviewer look bad: 1. The paper was written by a bunch of native speakers who have funny names. Yes, I have seen this happen. 2. The review has terrible grammar. This happens > 30% of the time. 3. An implication of bias, which can be avoided by replacing "native English speaker" with "expert editor" or similar. Upvotes: 0 <issue_comment>username_3: I think the answer somewhat depends on what kind and intensity of problems we are talking about. First, however, some thoughts on who else could deal with this: * **The editor and the initial quality check** (if one exists) have apparently judged the quality of English to be sufficient for you to review the paper. Thus, you have to expect them not to address this issue by themselves. * **Copy editors** can and hopefully will fix many issues, but as they are not experts of the subject, they can only do so much (also, some publishers have lousy copy editing). For example, I had copy editors miscorrect *a triangulation* to *the triangulation,* as they did not know that the triangulation was not unique in the context. 
In another example, I know cases where non-native speakers coined new technical terms whose suboptimal use of English only becomes apparent to somebody who understands the concept described by the term. Also, copy editors only act after the paper is accepted, which does not help if the quality of English is so bad that it's impeding the review. Regarding phrasing your advice, I prefer to avoid explicit suggestions about what to do, but rather state what problems you had and what has to change. You do not know the circumstances under which the manuscript was created and what ways to improve the language are available to the authors. For example, it may be that one of the authors has a good command of English but wasn't strongly involved in writing the manuscript. Make implicit hints strong rather than subtle though, as the authors may fail to notice the latter, given their English skills. In addition to [username_1's arguments against explicitly recommending native speakers](https://academia.stackexchange.com/a/60577/7734), I think that recommending a native speaker ignores the importance of understanding the subject matter. Somebody with decent English skills¹ who understands the manuscript is often much more valuable than a professional proofreader who is not from the field. To get some idea how to phrase the advice, I suggest asking yourself the following questions: * **Did you fail to understand significant portions of the paper due to bad English?** – If yes, you should definitely indicate this, as at least in the next round of review, you need to understand the whole paper. I suggest (and consider it appropriate) writing something along the lines of the following: > > Unfortunately, due to shortcomings in the language of the manuscript, I could not fully assess its quality. > > > This is an honest statement of facts that does not explicitly tell anybody what to do, but strongly implies that something needs to be done. Moreover, you are implicitly saying that this is not about the scientific quality of the manuscript and that you would like to assess its quality if only you could. * **Is the quality of English consistently bad?** – I often see manuscripts where you can clearly tell that certain passages were written by different authors. If some of the passages are good, this suggests that their author has a sufficient command of English to revise the rest of the manuscript. As an author, this person is likely best suited for the job. However, it could also be that the authors had only some passages proofread for whatever reason. Thus an explicit suggestion may be inaccurate or confusing, and I would suggest pointing out which passages are problematic and praising the others. This should make it sufficiently clear what to do: > > While Sections 1 to 3 were well written, I found it difficult to understand the English of Section 4. > > > * **Are there some kinds of mistakes that occur particularly often?** – If yes, point them out. For example, some authors tend to use multiple compounds wherever possible without properly hyphenating them, or mix up definite and indefinite articles. (In both these cases, correcting them often requires a deep understanding of the subject matter and thus cannot be done by a copy editor.) 
* **Did the review take significantly longer due to bad English?** – If yes, and you expect to have another round of review, you can save some time by remarking on the quality of English now, which I consider appropriate as you are volunteering to review after all. For example, you could say: > > Due to language mistakes, the manuscript was difficult to read. > > > If not, and if nothing else is wrong with the language that cannot be addressed on a per-sentence basis, a comment to the editor or a negative rating of the language quality in the editorial system (if it asks you for this) may suffice. --- ¹ preferably with a native language that is different from the authors' one Upvotes: 5 <issue_comment>username_4: It is not only the case that you *can* advise the authors to work on improving the level of English of their paper (including getting the paper proofread by others), but I'd argue that you *should* do so (or even go beyond advising to *requiring* that they have the paper proofread, or even beyond that to outright rejecting the paper on the grounds of not being comprehensible enough) if you feel that this is a serious enough issue. Generally speaking, it is your duty as a reviewer to point out and criticize any flaws in the paper that make it less valuable for the journal and its readers. This extends to all aspects of the paper, from the science, to the clarity of exposition, technical correctness, and the language. In particular, in cases where a major flaw in the paper's ability to communicate otherwise good science to the readers can be removed through the relatively small effort of having the paper proofread by a person with good English proficiency, it seems absolutely appropriate and advisable to advise, ask, or even require the authors to do that. On the other hand, as username_1 pointed out in her excellent answer, asking for the proofreader to be a native English speaker is an illogical requirement. What matters is that the proofreader should have a high level of proficiency in English, particularly with regards to professional or technical texts. Upvotes: 2 <issue_comment>username_5: I've seen a couple of variations of this pattern in articles I have authored or coauthored: * The most useful variation is where the reviewer points out specific places where grammar might not be correct, and especially places where those mistakes can lead to misunderstandings. Just write the paragraph you are referring to, and your preferred rewording. That helps the authors see how you are understanding their manuscript. * Also useful is stating potential terminology issues, where something is being referred to with terms which are understood differently by the target journal or community. Those can be alluded to, so that they can be fixed throughout the manuscript. In the cases above, I feel there is no need to call for a ”proficient English proofreader“. However, if the manuscript: * Is difficult to follow, because the grammar might be correct, but contorted; * Makes you feel unsure about the claims, because of the writing; * Is utterly opaque… Then say so, and use @username_1's suggestion of professional-grade proofreading. Ps. I have also seen the ”please, let co-author X proofread the manuscript“, when X is known to be a proficient (usually native) English speaker, but doing that is both a slap to the main author, and to X, because if s/he is a co-author, s/he should have already done so. 
Only write that when you're sure of the implications. Upvotes: 3 <issue_comment>username_6: I disagree in a subtle but significant way with many of the current answers. I absolutely agree that it is the responsibility of a reviewer to point out language problems. I strongly disagree, however, that the reviewer should tell the authors how to solve those problems, any more than they should tell the authors how to solve their scientific problems. The reviewer should say something like: > > This paper has numerous grammar and language issues, which need to be addressed. > > > Then it's up to the authors to figure out whether they need a native speaker, a professional proofreader, simply more care on the part of the current authors, etc. If you presume to diagnose *why* the paper is the way it is, it is just as inappropriate as if you said "the authors need to enlist the help of somebody who knows statistics to improve their data analysis." Even if you *do* know why and it would somehow be appropriate for you to say to the authors in person, remember that reviews are generally blind, and the authors can't tell you from some random jerk reviewer #3. In short: constructive reviews state the problem, rather than presuming the solution, and this applies to language as well as technical content. Upvotes: 4 <issue_comment>username_7: Although I consider myself fluent in English (having lived most of my adult life in English Canada), technically I am not a native speaker, as my mother tongue is not English. On more than one occasion as a reviewer, I advised authors to have their paper reviewed by a native speaker of English. I always understood that phrase to mean either someone whose mother tongue is English or someone who speaks (and writes) English at a native level, but not necessarily someone whose birth certificate was issued in an English-speaking jurisdiction. I feel compelled to do this when the paper in question contains a large number of grammatical mistakes or odd turns of phrase, too numerous for a reviewer to list or correct (if there are only a few mistakes, I just point them out and list them); and especially when the journal in question is not known for its high-quality copy editing. What good it does, I don't know. But I certainly never felt that it was inappropriate for me to offer this advice. Being an immigrant myself, I don't think that I am prejudiced against non-native speakers. I have done this both for papers that I recommended for rejection and also for papers that I deemed suitable for publication. Upvotes: 3 <issue_comment>username_8: I, being myself not a native English speaker and a person who has published in a respected journal, would recommend that you politely advise the authors to have their paper proofread by a native English speaker if *their target journal* is published in an English-speaking country and/or has a requirement for good language. Some journals (for example, respected British journals) do have this requirement. Upvotes: 1 <issue_comment>username_9: I always find it useful to tell these types of issues to the editor. If you tell the editor that the document is poorly written in terms of language use, I believe that the editor will find a way to forward your message properly to the writers. Upvotes: 2 <issue_comment>username_10: I do accept 'Nat' comments. As a non-native English speaker, what English do we have to learn? British, American, European, Australian or Indian? I strongly feel that content is very important and that grammar comes next. 
If you suggest corrections to the sentences, it is more appreciated; that is what I normally do when I review papers. Upvotes: 0 <issue_comment>username_11: You can ask them to do that, but it may not be appropriate. As an experienced native English proofreader who is very familiar with the peer-review process, I can say that often what is needed is not a proofread at all but rather a multi-stage substantive edit followed by a proofread. There are some proofreading companies that will give you a proofread if you ask them for one; I, and most other freelancers, will ask to see a sample of the work and then suggest the most appropriate service before quoting on providing that service. The requirement for native English versus non-native is a different question, and I would say that for publishable work it is normal to require a native speaker. Hope that helps. Upvotes: 0
2015/12/24
713
3,029
<issue_start>username_0: I am in the middle of writing my bachelor thesis and recently submitted to a conference a paper about the same subject that I wrote with my supervisor. If it makes any difference, I should mention that I am the first author. I would like to use the paper (15 pages) as the basis for my thesis (about 40 pages), i.e. I want to re-use most of its content without changing the wording. My supervisor told me that it is okay, and I have read that it is even common practice at some institutions to reuse your own previous work for a bachelor, master or PhD thesis. However, I am not quite sure if it is acceptable to reuse paragraphs which my supervisor wrote as co-author of that paper. In the Statutory Declaration that has to be attached to the thesis, it is stated that I wrote the thesis just by myself with only the help of the cited material. Do I have to reformulate the ideas in this case?<issue_comment>username_1: You definitely can't present words someone else wrote as your own in your thesis. While it is common to reuse prior work in theses (as long as that work wasn't submitted for credit previously), things are obviously complicated when that prior work has co-authors. I don't have the experience to say what would be acceptable, but I would be very cautious. To stay safe I'd advise not incorporating any of that work directly, but instead citing it as you would any other source. But that probably means you won't get fifteen pages out of it... Upvotes: 2 <issue_comment>username_2: Do you have a supervisor for the BA thesis? If so, that's the person to talk to. Assuming you get to choose a supervisor, it'd be sensible for you to pick the person you co-authored the paper with. Presumably, they will have no objection to using the text in full for your thesis (including the parts they wrote). There's no issue with plagiarism, unless your institution has an unusual interpretation thereof. The point of rules against plagiarism isn't to make you pointlessly rewrite your previous work; in any case, co-authored papers don't allow for distinctions between the paragraphs you wrote and the ones someone else wrote, even if you personally can tell them apart. It's also expected that a supervisor provided input to the thesis and (at least in the US) it's not unusual for that to include substantial editing or clarification. Especially if the thesis is good enough to be published down the road: might as well get started on the process early. You address this by thanking your supervisor in an acknowledgment section, along with other people who read your thesis or provided ideas. Taking plagiarism rules literally would lead to absurdities like citing "personal communication" over coffee or drinks. If someone provides a really cool idea and you use it, it's polite to thank them in a footnote; but that's the extent of it. The point of the rules is for you not to go take an idea from someone else's paper, then claim it as your own insight. Upvotes: 4 [selected_answer]
2015/12/24
487
2,128
<issue_start>username_0: I have submitted a paper to an Elsevier journal. They sent the first decision on my paper as a major revision on October 28 and gave me two months to revise the manuscript. The deadline is now very close, and I am stuck on a problem with my manuscript. I sent a request for an extension on the night of December 24 to the editorial office of the journal. Now, I am worried about not hearing from them before missing the deadline due to the holidays. Is their office closed on December 26? (I think their office is located in the UK.) What is your suggestion in this case? If I miss the deadline without hearing from the editor due to the holidays, will the editor extend the deadline after seeing my request sent on December 24?<issue_comment>username_1: Let me recap my understanding of your situation: * You cannot make an appropriate resubmission in time. * You informed the appropriate person of this fact (hopefully giving appropriate reasoning). I further assume that: * The person who will decide whether you get an extension can decide on a per-case basis using common sense and is not dogmatically bound to rules. * Further communication cannot possibly have any positive effect. Then, there is nothing you can do, except hope that the person deciding upon the extension is reasonable and decides in your favour. Whether this is actually the case is something that we cannot predict any better than you. Whether the journal office is open should hardly matter (and again is something that we cannot tell any better than you). Upvotes: 4 <issue_comment>username_2: In my experience, these deadlines are more "suggestions" than hard deadlines (as always, this may be field-specific). If you miss them by weeks with no communication, that's a problem and the associate editor is going to be annoyed and bug you. A few days (or anything reasonable with communication) shouldn't be a problem. After all, the journal has invested resources in the paper already and, presumably, wants to publish it. They'd rather you send work that doesn't require a second round of revisions. Upvotes: 3
2015/12/25
2,681
10,947
<issue_start>username_0: It's the first time I'm recruiting a PhD student. I advertised the position and have received a dozen applications, and there are still 2 weeks left on the 4-week announcement. What caught my eye is that I have two applicants who are already PhD students at other (lower-ranking) universities. They both have very strong CVs and even some good publications. However, they haven't included any references from their current university or given the reason they want to leave. One is a second-year PhD student and the other a third-year. Both have similar topics to the one I'm advertising for. > > * Is it OK if I ask them directly why they want to leave and why they haven't included any references from the place they have been for 2-3 years? > * Should I try contacting their current supervisor and ask for info? I even know one of them personally. I don't know if I should (it could land the student in trouble). > * In these situations, don't you feel that this person could suddenly leave if he/she finds a better position? Can I really trust him/her to stick through a 4-year project? > > > P.S.: My wife switched to another supervisor 6 months into her PhD because she really couldn't get along with her first supervisor, so I understand these things happen. I want to give these candidates a fair chance, but I want to cover my bases as well.<issue_comment>username_1: I would contact the students directly for answers before contacting the supervisors. This way, you can get some feedback and further information before deciding if you want to contact the supervisors. However, I would in any case *not* take a student from the lab of an acquaintance without the "blessing" of the acquaintance. Otherwise you risk setting up an unnecessarily antagonistic relationship between you and the other advisor. Upvotes: 5 <issue_comment>username_2: To get a feel for this situation from the student's point of view, I suggest reading questions such as [Changing PhD programs: should I submit a recommendation letter from my old advisor if it's not purely positive?](https://academia.stackexchange.com/q/46053/10220), [Incompatibility with the PhD advisor](https://academia.stackexchange.com/q/2106/10220), [Switch PhD program: how to contact possible PhD advisors when already enrolled in PhD program?](https://academia.stackexchange.com/q/17348/10220), and many others. The decision to try to switch programs is a difficult one for the student. They may be worried about harming their relationship with the current advisor if the move does not work out. From this point of view, contacting the advisor, rather than the student, would be like calling a job applicant's current employer. Although if they asked here they would probably be advised against it, there may be a temptation to just not mention the current PhD program other than to the extent that they have worked as a research assistant. --- The key question you need to ask both the students and yourself is "What would be different about my project compared to your current one?". Of course, there are no guarantees, except that you will not be able to get through a reasonably interesting life without embarrassment. For example, see [How to explain in PhD interview about leaving current PhD due to lack of funding?](https://academia.stackexchange.com/q/55615/10220). If you have good funding, and the student is switching because of lack of funding, there is no reason to expect them to be particularly likely to leave. 
The OP for that question wrote "I did not mention my current PhD in CV, and just wrote about my working experience in the last 1.5 year.". I have no idea why students do that, but it seems to be a common urge. Many questions either discuss the advisability of not mentioning incomplete degrees when applying for a PhD program, or are trying to deal afterwards with the consequences of not mentioning them. Upvotes: 5 [selected_answer]<issue_comment>username_3: Like others have written, there may be uncomplicated reasons that motivate the student to leave (like lack of funding, or a position that was advertised as a PhD position but is incompatible with a PhD due to the workload). But let's suppose it's a problem with the supervisor: 1. **Ask students?** Yes, ask, but it's a touchy subject. After all, bad supervisors do exist, and some are surprisingly "good" colleagues. Criticizing a supervisor is usually seen as disloyal (talking among PhD students in a bar excluded). Which is a problem if the loyalty is one-sided and the supervisor is exploitative or abusive. (Then again, there are students who have rather strange ideas about what a supervisor should do for them, so things aren't always clear-cut.) So I'd ask for their reasons for leaving (Funding? Did things get tough and the grass looks greener here? Do they want a higher-status position?). And if it's an issue with the supervisor, I'd go for a student who states it rationally, sees the interaction as two-sided (even if the other person is a jerk), and -- above all -- maintains a basic level of respect (as a sign of maturity). 2. **Ask the supervisor?** Like others have written and you suspected, ask the student first. Yes, it's better if things happen in the open; after all, that's the polite thing to do, and academia is small (well, the respective sub-domain is). But there is also a power imbalance at work here. Here I'd simply ask why they did not include any references (it might have been an oversight), and if it's a problem with a specific person, whether there are other people who can be contacted (not being able to work with a supervisor is one thing, not being able to work with the whole department another). 3. **Are they quitters?** Depends on their reasons for leaving. Actually, I think it's a sign of competence if a person leaves a bad position. The worst situation you can be in is when you are exploited/abused with no chance of establishing yourself as a scientist, yet feel unable to leave due to escalating commitment/one-sided obligation. So, it is an accomplishment to leave a dysfunctional environment, especially if you naturally feel a strong commitment to a position. But yeah, in every hire there is risk, and there are concerns. But I would look at the qualifications and treat the history as one factor (dispassionately; you are not responsible for their situation and they have to fit in your environment). I'd also ask the following questions: 1. What did they learn so far? 2. What do they think they need to finish their PhD successfully (and is that realistic)? 3. Are the habits they have acquired helpful? 4. Can I work with them? 5. What are their plans for the future? Upvotes: 2 <issue_comment>username_4: I think I am in the exact same situation as these two individuals. I work as a research assistant(!) and PhD student in Turkey and I have applied to many positions across Europe. The exclamation mark indicates that my *title* is research assistant, but what I do is simply proctoring and assisting with courses. 
No one expects me to conduct research, and more importantly, I am only valuable as long as I carry on proctoring. Therefore, as any human being would, I want to work as a happier person and do something that I enjoy in my life. Regarding your questions: > > Is it OK if I ask them directly why they want to leave and why they haven't > included any references from the place they have been for 2-3 years? > > > Cannot argue about that. This is a fair point. > > Should I try contacting their current supervisor and ask for info? I even know one of them personally. I don't know if I should (could land the student in trouble). > > > I think it is always better to know the applicant better, if you have the chance. Maybe there will be no positive feedback, but at least you may understand why the student wants to leave, based on the response of his/her supervisor. > > In these situations, don't you feel that this person could suddenly leave if he/she finds a better position? > > > I certainly do not. In my situation, what I want to do is pure research. If I somehow got a position where I had a lot of time to do my actual job (research), I would not risk it for a so-called *better* university. So, that person might have some reasons to leave, and if these reasons are gone, they probably will not even think about quitting. Upvotes: 2 <issue_comment>username_5: > > Is it OK if I ask them directly why they want to leave and why they haven't included any references from the place they have been for 2-3 years? > > > I think the most important thing to get clarity on is what kind of work environment the student expects. You don't want to end up taking in a student who may later quit for the exact same reasons. Expectations of students in academia can vary wildly (and honestly, many research groups, even highly successful ones, may not be bastions of good management practices!). As to why they include no references from the current position: the fact that they are willing to transfer (and delay their PhD completion time) suggests that they are unhappy with the work environment, and they may additionally fear retaliation if their current PI is contacted. I would suspect that they have a conflict with the PI or working environment, and that this isn't just over funding (over which I think most PIs might even help their students find new positions). Upvotes: 1 <issue_comment>username_6: Sometimes the situation is so unbelievable that it totally changes the student's life. Currently, I am a Ph.D. student at Texas A&M University-Kingsville. I was offered a teaching assistantship position. My visa has been in processing for the last 9 months. As a result of this situation, I have to take online classes with a partial fee. The dean assured me that they will support me in the first year despite my online classes. My supervisor is young and very helpful. I really want to work with him, but he lacks funding in the current semester. The dean declined to issue any funding, although he promised that I would have funding in my current semester. Now, I have realized that the offered funding is not enough for me and my family. And secondly, the situation is extraordinarily complex regarding visas: I know nothing about when they will issue the visa or what the result will be. I have 24 publications, including 10 SCI-indexed research papers. 
I am hopeful that I will get funding from another professor, but I do not want to cast a bad image on my advisor because he seems very helpful. Now, what should I do in this situation? So, some things are very complex on the student's side. As a student, I would suggest you contact the students first and understand the situation from them. Reconfirm their details from your own sources, and that's it. Regards, Upvotes: 0
2015/12/25
357
1,373
<issue_start>username_0: I have been working as a Research Assistant at a university for 2 years under a supervisor who has a doctoral degree. I have a fair knowledge of research and a first-class Bachelor's degree. I wish to apply for a PhD in the United States. Is it possible to secure a PhD opportunity without a Master's degree?<issue_comment>username_1: Roughly speaking, no Master's is needed... There are a multitude of Ph.D. programs offered by US universities for which an M.Sc. degree is not a mandatory requirement for entrance. As a matter of fact, a direct transition from a B.Sc. (a first-class or 4-year one) to a Ph.D. is a natural plan for a future research career within academia, whereas one often seeks an M.Sc. degree to work in industry. All in all, you had better check the admission requirements of the target programs case by case to see whether a Master's is necessary... Upvotes: 3 [selected_answer]<issue_comment>username_2: > > Is it possible to secure a PhD opportunity without a Master's degree? > > > Yes. At my institution this is in fact the most frequent applicant profile. Upvotes: 2 <issue_comment>username_3: At my university, you're at a disadvantage when you apply to a PhD program if you already have a master's. So essentially, yes, it's possible and more than likely preferable! Upvotes: 1
2015/12/25
834
3,620
<issue_start>username_0: The question says it all. To add some relevant information, I must note that * I published some articles in national journals * The level and the process of peer review in some of those national journals was quite low or even nonexistent; yet they claim to be double-blind peer-reviewed and are at the top of their field according to a journal list our Academy of Sciences uses when evaluating research performance * I intend to make the substandard reviews publicly available on the internet, or, in the case of nonexistent external review processes, my correspondence with the handling editor Now it seems obvious to me that no permission will be given to me by the journals if I ask for it, so I'm not sure whether I should ask them anyway. But would that be a bad move, legally speaking?<issue_comment>username_1: Unless you signed a very peculiar publication agreement, you are under no obligation to keep the reviews you received confidential. Moreover, there is probably no issue with copyright infringement either. Publishing a modest amount of text, for the purpose of critiquing it, almost certainly qualifies as "fair use" of potentially copyrighted material. (This could, however, vary from jurisdiction to jurisdiction.) So I think there is rather little legal risk in doing this. However, it sounds to me like a very poor idea nonetheless. In most academic contexts, the dangers of doing something wrong come less from formal legal repercussions than from the damage that may be done to your professional reputation. Presumably everyone who has published in these journals understands the relatively lax nature of their peer review (having gone through the process themselves). Unless one or more of the journals has an undeservedly high reputation (which seems doubtful, since you describe them as "national"; prestigious journals typically have international contributors and audiences), publishing accusations of laxity against the journal is not likely to affect the journal at all. On the other hand, doing something like that can definitely get you a reputation as an obnoxious troublemaker, which can be extremely damaging to your career. Upvotes: 3 <issue_comment>username_2: There is generally no need to obtain permission if you are intending to excerpt portions of a review as part of a critique. That would fall squarely under the ["fair use"](https://en.wikipedia.org/wiki/Fair_use) aspect of copyright law (at least in the US). A recent high-profile example of doing exactly the sort of thing you are asking about is [the sexist PLOS ONE reviewer whose review said to get a male co-author](http://news.sciencemag.org/scientific-community/2015/04/sexist-peer-review-elicits-furious-twitter-response): the angry author posted a few lines of the review, which was sufficient to get a firestorm going on social media. I would advise against posting a lengthy review that you are not commenting on in detail, however. It is both a stronger argument and also a more defensible legal position to post only key excerpts, and a whole review only if it is very short and that is the point you are commenting on. If that critique of key elements then brings requests for the full review for context, then it would be reasonable to post the full review. 
While there *might* be some question about copyright in such a case, it would be a foolish journal indeed that would complain after your critique has already met a significant reception, as that would essentially guarantee a [Streisand Effect](https://en.wikipedia.org/wiki/Streisand_effect) backlash. Upvotes: 2
<issue_start>username_0: I am in a PhD program, and all I need to do is defend. There is still a little more work I need to do before this, but my degree is essentially finished. I have completed all other requirements (classes, qualifying exams, proposals, publications, etc.). In my field (engineering) it is common to begin applying for jobs immediately after the proposal. So, after many interviews, I was offered a job. They want me to start ASAP. I am thinking this will push my PhD defense back 1 to 1.5 years. Anyway, on my resume, LinkedIn, etc., I would rather say my PhD was 2009 - 2015 (6 years) than 2009 - 2016/2017 (7-8 years). Would it be acceptable to state that this year (2015) is my graduation year? I feel I'm selling myself short if I state 2016 or 2017 as my graduation year.<issue_comment>username_1: No, you can't state that 2015 is your graduation year. The year you graduate has a well-defined meaning -- it's the year the university confers your degree on you. Period. Writing that 2015 is your graduation year would be the same, ethically, as writing that 2013 is your graduation year just to make your resume look more impressive. Your resume will, correctly, indicate that you're working at the same time as completing your degree, which should be impressive enough for anyone doing bookkeeping on your dates. Upvotes: 6 [selected_answer]<issue_comment>username_2: You haven't graduated, or even fulfilled all the graduation requirements: it's unlikely but possible that you could fail the defense, and it sounds like you haven't even quite completed everything you need to do before the defense. Otherwise why would it take 1 to 1.5 years to reach that point, even while working full time? You can give an honest description of your current progress towards finishing the degree (for example, that you have completed all requirements except the formal defense, if that is the case), but you should not exaggerate or suggest you have already graduated. Upvotes: 4 <issue_comment>username_3: It's your CV, and you can write what you want on it. However, if you are asking whether it is in any way honest or acceptable to others to list a graduation date which is not the accurate one because you think it will make you look better: of course not. That would be a form of academic dishonesty, and if you give inaccurate information to a future employer, it could itself be grounds for your dismissal. There is some doubletalk in your question itself. No PhD degree is "essentially finished" before you finish and defend the thesis: that is the single most important (and most challenging) part. Moreover, the idea that you are "essentially finished" is undermined by your estimate that your graduation date could be put off for a year or more. If you are really finished, then you can arrange to come back to defend your thesis...on a weekend or during your vacation time if need be. A much more serious, responsible thing to do would be to discuss this with your employer. If it is common to look for people who say they are in their final year of their PhD and ask them to start working before they defend and graduate, then they must have some policies and opinions about how to do this. Moreover, since they are hunting among "almost PhDs" it seems very likely that they want you to get a PhD, which means they should act in a way that will allow you to finish the degree.
If that is not the case, then having the PhD is not very important to them, and if it is important to you then maybe you should reconsider whether this is a good fit. One final piece of advice: you are worried that by taking the job your graduation date will be pushed back unacceptably. In my opinion this is not a serious worry: people receive a PhD 1-2 years later in many circumstances. Even for an academic career, taking a little longer to do a PhD is usually not a real issue. For an industrial career *I think* it would matter even less. Your real worry should be: **are you sure you are going to finish your PhD at all**? Going from "essentially finished" to "1 to 1.5 more years" makes me concerned on your behalf. Please make an explicit plan which leads to you getting your PhD in year X (whatever X is) before you accept the job...or accept that you may well be abandoning your PhD by taking the job. Upvotes: 3
<issue_start>username_0: I was supposed to do a project for one of my classes, so I talked to one of the math professors at my school for ideas; he was excited about my project and even got one of his grad students involved. The thing is, I didn't do well in the class and I didn't get around to finishing the project to its fullest potential, so I never got back to him (the math prof) about it because I was too embarrassed. I'm really paranoid now even when I'm just in the building he works in, and it sucks because I'm a mathematics student so I'm there a lot. I don't want to run into him or his grad student because I don't want them asking me about the project. The semester's over now and I don't even know if he'd remember me. I just feel like I've let him down. I really do want to work on the project; I just didn't understand what equations and methods to use, so it was a mess by the end. Anyway, I'm just wondering what I should do about this? Should I email him? Should I just avoid him? I may want to work with him for research, but I don't want him to remember this cliffhanger relationship. Thanks!<issue_comment>username_1: Go talk to the professor, and tell them exactly what happened. The worst thing that can happen is that you get thrown out. Not exactly pleasant, sure; but at least you get it off your chest. And you *might* get another shot at the project. Upvotes: 3 <issue_comment>username_2: My understanding is that you proposed a project to a professor, who was not teaching the class you did poorly in, and just didn't make as much progress as quickly as you had hoped. If that's the case: welcome to academia! This is a line of work where most of the things that seem simple take forever, and some of the things that seem difficult can be resolved pretty quickly. That is, a domain where it's quite difficult to come up with a timetable and stick to it. If you still want to work on the project, set up a meeting and talk about the bumps you hit. He's likely to give you some ideas on how to overcome them, especially now that the project is no longer for a course grade. Commit to a follow-up meeting so that you have a deadline and hopefully some concrete goals to work toward. I wouldn't apologize for not being in touch: there's no need to be defensive. This is your project, so you're in charge of prioritizing it. He wasn't sitting in his office waiting to hear from you. Upvotes: 2 <issue_comment>username_3: Don't worry. The professor didn't invest much, so he won't feel let down. Professors don't expect students to do real work, so at worst he will treat you like an average student. You may have lost your "student I will try to get into my group" status, but if you want to work in this area, you can get that status back rather quickly. It is likely that he does not care about your classes and does not know when you were supposed to finish your project, so if you come back to him a few months late he might not even notice. One way of dealing with your problem would be to set up a "chance meeting" - which shouldn't be too difficult, as you know (or can easily find out) when and where the professor lectures, attends talks, and so on. Then ask whether you could come to him to talk about your project. Most mathematicians are happy about every student who wants to talk about mathematics, so if you manage to move the focus from "assignment for a specific class" to "math problem you are interested in", you should get an appointment almost instantly. Upvotes: 1
<issue_start>username_0: I recently earned an undergraduate degree in computer science\*. I had no interest in going to grad school, so I focused purely on coursework. I have no research experience. I am now more open to the idea of further education, and I think life will be more fulfilling if I produce knowledge than if I just apply my knowledge in developing a product. **How can I get started on research** in a way that will (1) let me know if it is something I want to spend my life doing, and (2) pave the way to a PhD in CS if I decide it is something I want to do? (I ask this question under the assumption that applying for a PhD is off the table for now because I do not yet know if I even like research, and even if I did know that, I have no research experience to show off and no one to write letters of recommendation that vouch for me as a researcher.) Could I... * Get a master's degree? (Tuition and moving out of state are a steep price to pay just to see if I like something. Plus, I doubt my potential to get into a good program due to a lack of CS professors who know me well enough to write a letter of recommendation for me.) * Enroll in a post-baccalaureate program with a non-CS major (e.g. math) just to be an undergraduate again and have professors to do some research with, even if I do not intend to complete another undergraduate degree? * Get an entry level software development job and move from there into R&D? * Get a job as a research assistant for university faculty? (Or volunteer to do it for free?) Whatever the answer, I need the opportunity to prove myself to someone who can write a letter of recommendation that is taken seriously for PhD admissions. (I originally titled this question "How to get LoRs with no undergraduate research experience" until I realized the scope of my question was much wider than how to get into a PhD program. Getting LoRs is still essential, though -- if I decide I want a PhD, then I need to have a shot at it.) \*Or rather, I will in six months -- this question is from a future perspective. I am also considering the options I have before graduating, but those are for another question. The point of *this* question is: if none of those options work out, then what other options will I have left?<issue_comment>username_1: One method you can try is to delay graduation by one year and start working on a research project under the guidance of a faculty member in your CS department. If no such undergrad research program exists in your department, it may be possible to arrange something with individual professors (under the name of senior thesis, independent study, or on-campus employment, etc.) who are interested in taking on a student. Another good option is to find an internship or research assistant position in another university after you graduate. However, even such opportunities usually come with the expectation that you have some research experience to begin with. So a more realistic approach may be to start doing research now in your undergrad institute as a stepping stone while you are still a student, and then apply for an external internship/research assistant position with the help of your research supervisor. Getting a master's degree is a possibility. If you choose this approach make sure that you are not getting a terminal master's degree that is intended to prepare for professional work in, for example, software development, rather than to prepare for further PhD studies. 
Finally, getting a job in software development is probably not a good idea, because usually the practical experience you get has little direct bearing on the content of a PhD study. Spending one or two years coding does not help to develop your ability to do theoretical research. No matter which approach you take, the key to a meaningful internship is to find someone who is 1) well-established in academia and 2) good at mentoring. Ideally, you want to spend 1-2 years in this person's lab, so that you have enough time to grow as a researcher and demonstrate your ability (and potential) to consistently produce good research work. You may want to talk with some professors who are familiar with you for recommendations, or talk with students in your prospective mentor's lab to get to know the professor's reputation. Remember that it is not enough to simply spend 2 or 3 months in a lab just so that you can pad your resume with some extra fluff. Upvotes: 2 <issue_comment>username_2: > > How can I get started on research in a way that will (1) let me know if it is something I want to spend my life doing, and (2) pave the way to a PhD in CS if I decide it is something I want to do? > > > I get that you only recently determined that you may want to do research. However, the fact that you haven't already sought out some research projects with faculty at your school may suggest a certain lack of seriousness on your part about really finding out if research is right for you or not. That said ... You are still enrolled in your undergraduate program, and you have 6 months left until you graduate. So I think a logical first step would be to **talk to professors at your school now** to see if they have any projects you could work on for the remainder of your time at your undergrad institution. While you may or may not obtain any research breakthroughs during a 6-month time frame, this will be a great way to learn very quickly if research is right for you, and could even help you narrow down your areas of interest. ***Time is of the essence; don't let the last 6 months of your undergraduate career go to waste!*** Looking beyond your undergraduate career, there are a few options to consider. Some of the options that you mentioned in your post don't really make a whole lot of sense to me. As I see it, here are the two main, logical options that you have going forward: * Get a job related to your subject area in a city that has a decent graduate program in the field you are interested in, and enroll in the grad program as a part-time Masters student. If being a part-time student while working full-time doesn't appeal to you, check with faculty to see if they'd be willing to work with you as a non-student. Either way, **get involved** with those faculty who are doing the types of research that you find interesting. * Skip going into industry and go straight into a full-time Masters program. Don't get caught up in the same cycle you are in now, where you are only focusing on your coursework. **Be very proactive** with the faculty and try to scope out a project where you can really get exposed to research and what it is all about. For either of the above options, so long as you do good work, and if you decide that research is what you want to do, then obtaining quality LoRs for admission to a PhD program should be a no-brainer. The key decision that needs to be made is whether you should go into industry after graduating with your undergrad degree or not.
**This is a very personal decision, and nobody can make this decision for you.** I did my Masters degree part-time while working full-time. It's not easy, and nobody said it would be, but it was the way forward that made the most sense for me at the time. In my case, I felt like I needed to do some industry work before I did any research to 1. acclimate myself to where my subfield was going, and 2. figure out how I could plug into that. Whether to go the part-time or full-time Masters degree route is a very personal decision, and the right path for you depends on your preferences and the particulars of your situation. In the comments, you wrote: > > I am looking at undergraduate research opportunities, but six month is a very short time, and there is barely any undergraduate CS research at my school. I want to know what other options I will have. > > > Again, I am concerned that you are not quite up to the challenge of really finding out the answers you are seeking; stop looking for excuses and start attacking the problem *now*. In the end, I can't stress the following enough: There is no substitute for being **very proactive** and **highly motivated** to find out for yourself if research is really right for you or not. Upvotes: 2
<issue_start>username_0: I work in computer science, specifically in computer vision. I need advice on whether I should list a paper that I have submitted but for which I have not yet received an acceptance or rejection. I am all the more unsure because the conference's review process is double-blind. My instinct is not to include it in my CV, but truth be told, my CV is not yet rich in publications.<issue_comment>username_1: No, submitted conference papers do not go on a CV (irrespective of the review process). To the extent that getting a paper accepted at a competitive conference is valuable, that value comes from being accepted. Anyone can, after all, submit a paper to any conference or journal. You can list your paper under an *In Progress* section. This is particularly valuable when you have a version of the paper available that you can link to. Without a link to a paper, it's difficult to guess what stage the project is in, but it can still be a good way to show what you are working on if you don't yet have related publications. Upvotes: 1 <issue_comment>username_2: The biggest factor here is the stage of your career. If you're a grad student, for whom a single submitted paper might be a very large fraction of your research output, it's OK to mention it in a CV, provided you're upfront about its submitted status. However, beyond that it's counterproductive, and should be avoided. Upvotes: 3 <issue_comment>username_3: My experience is that "yes, it can be helpful", especially if you have good reasons to think that the papers could be accepted. When applying for both of my postdoc positions I mentioned submitted papers in my CV. It just happened that these yet-unpublished papers were more relevant to what the positions were looking for, compared with the rest of my publications. The admission committees were apparently interested in these submitted-only papers, and had questions about them either during the interview or at some point during the admission process. In one case I proactively emailed the responsible faculty member to notify him that the paper was accepted (we had had an email exchange before, so it felt natural to update him). Overall, I think that having these papers visible in my CV helped me to get the positions. A couple of obvious caveats: 1. Make sure the submitted-only papers are very clearly distinguishable from the rest; ideally, put them in a section of their own. 2. If the papers end up being rejected before you have the admission interview, having them *may* reflect on you negatively. Upvotes: 3 <issue_comment>username_4: Applying for jobs is the one time that listing submitted papers (or grants) *may* make sense. You may more effectively communicate who you are and who you see yourself becoming, including who you are working with. That could be useful to the committee. If you choose this strategy, *very* clearly flag it. I would preface the item with **unpublished** (bold face) and follow it with "under review by [journal/conference name]," "to be submitted to [name]", or even just "in prep". I would also clearly state in the cover letter that I'd taken the unusual step of including unpublished papers and why (e.g. to show that I was working on moving into the areas of interest to the committee, or on work that would qualify me under the recommended criteria for the position). I'd also say when I expected to learn whether the papers were accepted, so that they could ask about that in the interview or whatnot.
It's good that you worry about the double-blind review process, but a job is IMO more important than a single publication, so it's worth risking having to resubmit the paper somewhere. It's probably fairly unlikely your CV will be seen by one of your reviewers (unless you're applying to the lab of someone who might review it). What you might worry about more is whether the committee will believe you (there's no way to verify claimed in-progress publications), or whether *they* think it's wrong to include unpublished papers. Most committees like as much information as possible, but some view anything deviant as suspicious. As with any job application question, you should *always* **ask the person named in the advertisement for "informal queries"** about this sort of thing. That's exactly what they are listed for. Also, contacting the assigned contact point with a sincere question improves the chances they will notice your CV. If nothing else, they know you can follow instructions and are communicative, both important skills in academia. Upvotes: 1 <issue_comment>username_5: Yes, you can, but you have to be careful about it. In my CV, I have a separate sub-section under Publications called "Preprints/Working Papers", in which I list all my submitted papers. For each submitted paper, I give the author name(s) and the title of the paper, and instead of the conference name, I just write "Submitted to major CS/ML/CV conference, under review." Upvotes: 0
<issue_start>username_0: I am an international student applying for Computer Science PhD programs in the US. Some universities I plan to apply to require a Personal History Statement in addition to the Statement of Purpose. I believe that I described my research interests, talents, and motivation in my SOP quite well. I happen to be a pretty "diverse" person as well, so I wrote about that in my PHS. The only drawback is that my PHS reveals some details of my background that may serve as a "red flag" signaling a lack of intention to return to my home country after I graduate. I know this is not a good thing for visa approval, but I wonder what attitude university departments have toward such students. I have received a lot of criticism for this approach. People say that I must not appear to be desperate and should avoid saying anything that is not directly relevant to my interest in Computer Science, but I believe that my history is an important part of my identity (and a chance for alternative funding sources), and I'm out of ideas about what else I can write about Computer Science that is not already covered by my SOP, resume, and recommenders. So what would you suggest? Should I omit any details of my history that may raise suspicion about my immigration intentions?<issue_comment>username_1: People who get a Ph.D. and also try to immigrate are generally quite welcome in US academia. The problem you may run into is that some people who just want to immigrate sometimes try to use a Ph.D. program as a visa source without having any real interest in a Ph.D., and then drop out when they find a company that can hire them. This is understandably annoying to professors, who may have spent quite a bit of time, energy, and money on such a student, only to find the student has been disingenuous all along. Thus, if you "smell" like such a student, you may find difficulty getting admitted. If this is your intention, don't do it. If you truly want a Ph.D., however, and the immigration is a separate desire, then I would advise you to just focus on the things that are about getting a Ph.D. Let your passion and your focus shine through, so that's the main thing that the professors notice. Don't worry about trying to *hide* your desire to immigrate: if it comes out naturally as a small part of your statement, that's OK, since the professors reading it will likely be guessing you want to immigrate unless you explicitly say otherwise. Upvotes: 5 [selected_answer]<issue_comment>username_2: The majority of foreign students would like to immigrate to the US after completing their degrees. Most faculty have no problem with this, but then again we're in the business of educating students, not smoothing the pathway to immigration. I wouldn't discuss your immigration plans in your application for a PhD program. As others have mentioned, you will also have to explain to US government officials why you want to study in the US when you apply for your F1 student visa. You must have the intention of leaving the US after your studies in order to obtain this visa. Saying that you ultimately intend to immigrate to the US will most likely result in denial of your visa. Upvotes: 2 <issue_comment>username_3: To reiterate the point of the other answers and comments, without this getting lost in a long suite of comments... During the Bush administration, the INS was very strict about students' promising and giving some evidence that they'd return to their home country after degree completion... no matter what the regime was there, etc.
Some kids from PRC had trouble because they could not adequately document that they had family in PRC giving them motivation to return! (I've been involved with grad admissions in math for the last 30 years and have seen the evolution of visa issues...) Also in that era, a Lebanese student who'd gone home to visit was detained (by INS) for some days without any charges, etc! As far as I know, in principle the same rules still apply, but, as always, the question is about the degree to which they are enforced by INS. As in some other comments, the illogic or self-inconsistency of the supposed need to declare no interest in immigrating... seems to be irrelevant to INS. The separate issue of departmental/program reactions is less artificial. Still, over the years, a significant number of grad students have made clear, once here, that they would have taken any available route to get into the U.S., and, yes, this is a bit annoying, even if understandable, because it causes our departmental resources to be used somewhat less efficiently than we'd hope. Maybe this is inescapable...? Nevertheless, we are reconciled to the fact that a certain rate of grad students discover after a year or two that they don't want to do PhD's in math, and that's fine. It's not obligatory, etc. If anything, the rate of attrition among domestic students may be higher than among international students, but/and we attribute this partly to the fact that a typical B.S. program in math in the U.S. does *not* have as much substance as many B.S. programs elsewhere, much less M.S. programs elsewhere, so there can easily be a larger element of surprise for U.S. undergrads going to grad school. In summary: instead of thinking about "lying by omission", consider that "telling the truth" loses some (even more?) of its pretended-objective nature when the *context* in which questioning occurs is highly contrived, so that words take on different meanings and have surprise consequences... Upvotes: 3
<issue_start>username_0: **TL;DR:** Is it normal for the theory sub-community and the applied sub-community of a single research field to have different goals and different standards for peer review - to the extent that the research field itself suffers? I work in a subfield of computer networking. During the last two years I've been in a quite unusual position (for the subfield) as a person whose job was to bring some mostly-theoretical results of a big research project to practice. Personally I've greatly enjoyed this role; however, the job also brought a lot of frustrations. It seemed that neither the theory people nor the applied people are particularly interested in work that bridges these two sub-communities. In general, there is very little overlap between what the theory sub-community cares about and what the applied sub-community cares about. To give one example, the assumptions made by the people in the theory camp are often wildly wrong. In my opinion, that renders most of their results quite useless; however, they do not spend much effort trying to recognize and correct these invalid assumptions, as it's possible to publish their results anyway, especially in sub-premier conferences. On the other hand, I feel that applied conferences in the field are resistant to novel ideas; a paper that simply demonstrates how to achieve state-of-the-art results with a different approach is not recognized as novel enough. In my opinion, the typical top-level applied paper is therefore quite boring; due to the peer-review standards, such papers must focus on the engineering work and on applying known ideas to existing problems, rather than on novel ideas and novel problems. To give some other anecdotal examples: * after I reported that a specific algorithm (developed by one of my theory-oriented collaborators) did not seem to give good results on a real dataset, it was suggested that "maybe we should evaluate it on synthetically generated data instead"; * in a different conversation, a senior applied researcher commented that "we skip over any mathematical parts when reviewing a paper". To me it seems very likely that progress in the subfield would be faster if both communities worked more closely together. The objective of science, after all, is not simply to produce more papers; it is to discover more knowledge. Are other disciplines facing a similar problem, and how do they deal with it?<issue_comment>username_1: This is so common that there are entire subfields dedicated to bringing theory (and even some applied research) out of the lab and into practice, such as "translational medicine." In my own field (library and information science; I'm a practitioner and educator, not a researcher) there is considerable friction between the research and practice arms, each finding reasons to despise the other. I don't find that particularly healthy, but I've also seen more than one giant unnecessary mess created by researchers sailing in to Fix All The Things without a good enough grounding in the practicalities. (OAI-PMH, enough said.) If there's a fix for this, I don't know it; most of the attempted fixes I've seen (like translational medicine) are pretty kludgy. Upvotes: 3 <issue_comment>username_2: Significant splits between theory and practice, such as you are describing, are not at all unusual. What many people fail to recognize is that research spanning between theory and practice is effectively interdisciplinary research! Moreover, what you call "theory" others may call "applied" and vice versa.
I have noticed that whether I feel like the "theory person" or the "applied person" in the room depends strongly on which community I am participating in at the time. In fact, what you are experiencing is only a small part of the theory/application spectrum. Many funding organizations attempt to quantify the theory/application spectrum with some sort of [technology readiness level rubric](https://en.wikipedia.org/wiki/Technology_readiness_level), and you are almost certainly experiencing only TRL 1-3 communities, as that is where almost all academic research lives. Why this matters, and why there is a gap in communication between communities, is that the problems encountered and solutions required at different TRLs are often fundamentally different. At one end of the spectrum is: "Might this be plausible?" and at the other end is "How much does it cost to get one shipped to me by Tuesday?" All of the questions on this spectrum can be valuable, even if some seem "farfetched" and others "boring" to you based on where you personally happen to be sitting on the spectrum right now. Spanning different levels is difficult, and to do so you will need to figure out how to speak the languages of the different communities. If you are able to do so, however, it can be of great value both to your career and to the larger communities in which you live. Upvotes: 4 <issue_comment>username_3: Practice is engineering: building systems that actually work. Theory is science: understanding the environment and why systems work. There are both social and natural reasons for this divide. Reality is too messy to be directly analyzed mathematically without lots of simplifications. Figuring out what is essential and what is not is a difficult task. There is a mismatch between theoretical models and reality. When is a theoretical model applicable in practice? What assumptions can be made about reality to simplify it and make it amenable to mathematical analysis? Even if we get the models right, our understanding often lags behind what people want to do in practice, and behind what they manage to do without understanding why it works when it works and why it fails when it fails. Do you want a system that seems to work in practice most of the time even if you don't understand why it works and when it can fail, or do you want a system that you can mathematically understand and prove results about? Sometimes you cannot have both! There are also social constraints. Theoreticians typically do not have the skills to implement their systems and test them in practice. Practitioners typically do not have the skills to mathematically model their problem and prove statements about it. Both typically lack knowledge of the language of the other, which makes a discussion between them a very frustrating experience. The issue is not simply about individuals. There is little positive feedback for any effort to improve and cross the gap. The exceptions that succeed succeed magnificently, but that seldom happens. Most of the time you spend a lot of effort on something which your reviewers/peers/managers (the people who evaluate your work) do not care about or value. Systematically, people learn to avoid those efforts, and we end up with people who have a very narrow expertise and no craving to cross the gap. What can be done to improve the situation? There can be more systematic efforts from both communities to value and give actual positive feedback to people who try to close the gap.
System conferences can require, at the least, a mathematical statement of the problem being solved and a mathematical description of the system it is solved in. It is frustrating for theoreticians to try to figure out what exact problem is being solved by reading engineering documents and articles. Theory conferences can require an implementation of any algorithm developed and a comparative benchmark of the algorithm on a standard data set against other algorithms, as well as a description of when the assumptions of the model hold, with relevant examples from practice. The communities can push for every person to have enough of an understanding of the language of the other community to be able to hold a constructive, beneficial discussion. Upvotes: 2 <issue_comment>username_4: Almost every field is "theory" for some people and "practice" for others. It may be more useful to talk about interdisciplinary vs. intradisciplinary work instead of theory vs. practice. The "normal" way to do research is intradisciplinary. You work on ideas from your field, produce new results, and publish them to other people in your field. The more established the field is, the easier it generally is to evaluate the quality of your work. This is obviously a good thing when applying for jobs or funding. In interdisciplinary research, you take ideas from a theoretical field, produce results in an intermediate field, and publish them to people in an applied field. This can be hard and frustrating, because you need to communicate with two incomprehensible alien cultures. The upside is that even minor results can have significant impact, because true interdisciplinary work is too frustrating for most people. On the downside, the significance of your work can be hard to evaluate, because everything is so incomprehensible to outsiders. I think there may be a natural balance between the two modes. When the intermediate field becomes more established, the incentives start favoring people working within the field. If the field grows too insular, the rewards for interdisciplinary work grow greater, attracting people who can accept the higher risks associated with it. Upvotes: 2 <issue_comment>username_5: Sometimes it happens that theory is advanced using methods that are not common knowledge among empirical researchers. McElreath and Boyd write in their [book](http://xcelab.net/rm/book/): > > Imagine a field in which nearly all important theory is written in Latin, but most researchers can barely read Latin and certainly cannot speak it. Everyone cites the Latin papers, and a few even struggle through them for implications. However, most rely upon a tiny cabal of fluent Latin-speakers to develop and vet important theory. > > > Replace Latin by Maths and you will have a fairly good description of some social sciences and of evolutionary biology. For example, I have lost count of how many empirical studies in Sociology about Becker's new home economics try to test hypotheses attributed to him but seem to be based on a misunderstanding of the mathematical models in his writings. This divide between empirical and theoretical work is caused by a lack of basic mathematical knowledge among empirical sociologists. Upvotes: 2 <issue_comment>username_6: > > Are other disciplines facing a similar problem, and how do they deal with it? > > > There is a recognized chasm between research and practice within information systems research [1].
Here is a quick summary of some of the ways that scholars are trying to deal with it (or at least the solutions they are proposing): 1. Joint university-industry appointments, where universities appoint expert practitioners to paid part-time university posts [2] 2. A focus on better forms of dissemination than research journals, for instance websites with actionable content [2] 3. More systematic reviews of literature, in the same way as the Cochrane Collaboration does for medical research [2] 4. Ensuring that editorial boards and program committees have equal representation from academics and practitioners [2] 5. More action research: research where practitioners and researchers work together to test and refine principles [2] 6. Applicability checks - e.g., academics should assess whether research is relevant to practice before deciding whether to write it up. They do this using focus groups with practitioners [1] 7. IS researchers should develop closer links to business and technology, for instance by (i) conducting sabbaticals in corporations, (ii) having industry-based projects for students, (iii) encouraging internships for junior faculty, (iv) doing business consulting, and (v) building partnerships and alliances with business groups [1] 8. Universities should improve IS faculty levels of practical knowledge by (i) re-evaluating tenure criteria, (ii) realigning faculty reward processes, and (iii) changing standards for evaluation 9. IS researchers should (i) involve practitioners directly in research, (ii) actively seek problems from practitioners, and (iii) run more surveys about practitioner issues to develop a research agenda that aligns with practice [1] 10. Universities should revise Ph.D. program requirements to (i) require at least a minimal level of business experience and managerial involvement as a requirement for admission or as a supplemental part of doctoral programs, (ii) adopt a strategy where interdisciplinary dissertations and studies of actual business practices are viewed positively within the dissertation process, and (iii) develop the business experience of students in doctoral programs [1] 11. Universities should form partnerships with professional and discipline-based organisations: "Individual schools, and the AACSB, should encourage disciplinary-based academic organizations to include practitioners in their annual meetings to help define new issues of which the membership would be aware" [1] 12. Researchers should (i) produce short and concise research reports, (ii) use traditional practice reports, management briefs, white papers, and the Internet to disseminate research where possible, (iii) produce special issues on topics of interest to IS managers and hold themed conferences with both academics and practitioners, (iv) publish online to shorten research-to-publication times, and (v) write practice-oriented books [1] **References:** [1] <NAME>. and <NAME> (2008). "Toward improving the relevance of information systems research to practice: The role of applicability checks." MIS Quarterly 32(1): 7-22. [2] <NAME>., <NAME>, and <NAME> (2000). "Building links between IS research and professional practice: improving the relevance and impact of IS research." Proceedings of the Twenty-First International Conference on Information Systems. Association for Information Systems.
<issue_start>username_0: I am teaching my first non-examinable graduate course this Winter term. The course is for students ranging from Master's students to PhD students in Pure Mathematics, and it is meant to show students a topic that is a bit more specialised than the typical qualifier-type course. Since it is a bit more specialised, I am having some trouble figuring out where to start. I can start with a bit of background, at the risk of boring some of the more advanced PhD students working in my field, or I can tell the less experienced students where to find that material so they can catch up. I am leaning towards the latter so that more students can take the course, but the course is only 16 one-hour lectures and I want to get to some interesting material that can help someone starting to work in this field. I am also concerned I might be overzealous about what I can accomplish in this time and might speed right through details that would be useful. (My imposter syndrome is acting up and making me feel like a lot of this stuff is trivial when maybe it's not!) The question is this: **In a topics course, how do you balance covering general theory against the more specific material that is relevant to current research?** I'm afraid that if I spend too much time on general theory, I may end up not touching on what research is being done today. Would it be bad to spend the last lecture or two of a topics class simply on the question "What's happening nowadays in this field?" Thanks in advance!<issue_comment>username_1: One concept/approach, which is what I try to keep in mind in addressing a heterogeneous audience in a not-required, more-advanced course, is exactly to not try to be "systematic", much less "complete". As noted in the question, it would be easy to use all the available time for "general theory", without ever getting around to the points of interest that motivated the course in the first place. So, instead of "laying a systematic foundation...", I try to get to interesting phenomena as directly as possible, and then back-fill "general theory" as relevant. For that matter, "general theory" is more widely available in textbooks and on-line notes than are interesting examples (since the latter tend to be written in "research papers" aimed at more-expert audiences), so students can more easily do further back-fill themselves if desired. For that matter, some "general theory" is somehow over-hyped as a big deal, when, in fact, the introductory parts of it are really very elementary. For example, in an introduction to modular/automorphic forms (my interest...), one *could* position the thing as having prerequisites of "Lie theory", "algebraic number theory", "representation theory", and/or "algebraic geometry". It is certainly true that all these things are relevant and useful, but (I claim) it is *not* the case that one must have "systematically" studied all these things *prior*... For example, "Lie theory" at the entry level can be simply a study of the behavior of two-by-two real and complex matrices. "Algebraic number theory" at a basic level is just about integers, or perhaps Gaussian integers, and so on. "Representation theory" (infinite-dimensional unitaries, etc.)
relevant to automorphic forms will not even be *found* in "basic" representation theory books (which tend to do finite-dimensional and more combinatorial things), but can be approached in a down-to-earth way (for real rank-one groups) as Bargmann and Wigner did, by looking at asymptotics of solutions of second-order ordinary differential equations. (Indeed, the later work of Harish-Chandra and Casselman-Milicic might be construed as adapting those classical asymptotic techniques via Deligne's PDE version of the old ODE asymptotic results, as in the appendix to the C-M paper about it.) In particular, to my perception, far too often students think that they have some moral-professional obligation to be "completely expert-prepared" in the alleged background material before starting the next stage... which is (to my perception) substantially misguided, since that way one has no inkling of what the use of the "background" might be. Trying to short-circuit this impulse a bit by directly drawing attention to and explicating the applications is a good antidote. And, to repeat, one has no obligation to "prove everything in class"... That's a recipe for not getting anywhere. Upvotes: 3 <issue_comment>username_2: You do need to balance, and this balance will depend on several factors, primarily (i) your goals for the course, (ii) student needs, (iii) student backgrounds, and (iv) subject material. Of course (i) should take into account (ii) and (iii), and you can get a better sense of (ii) and (iii) by talking to other faculty and students beforehand. Some advanced courses go through and prove everything or almost everything in detail assuming certain prerequisites and just get as far as they can, whereas others are more "seminar style" and just talk about ideas with little detail. Most are probably somewhere in between. All these types of course can be useful, but they meet different needs of students. My main suggestion is to figure out, as concretely as possible, (i) and (ii). Then you can try to prioritize what are the most important versus least important things for the course, and how you can get there given (iii), i.e., how much theory/detail you need to or can skip. One thing I try to do is set formal, concrete prerequisites (which I try to make as minimal as possible, but if someone wants to take the course without the prereqs, I tell them to read so-and-so before the course starts). Then I type up "appendices" or "surveys" to summarize additional preliminary material needed in the course. (Typically I won't type up proofs, but I will include some examples, try to give intuition, and include references for complete details. Usually I will lecture on most of this material as well. Depending on the course, this might come at the beginning, middle, or end, or be spread out in various supplementary modules.) This way, the students who haven't learned this material already can still logically follow the course, taking these additional materials for granted. (If you're stupid like me, you can also try to type up notes for the whole course and make them more complete than your lectures, but this takes a *lot* of time.) Upvotes: 2
<issue_start>username_0: Suppose that our friends Alice and Bob are both academicians (*academician*: a member of the faculty of a college or university) at universities A and B respectively. University A requires a lot of teaching and administrative work, whereas university B requires a certain amount of research and only a small amount of administrative work. Alice enjoys research very much, but she cannot spend enough time on it to be productive in terms of publications. Bob, on the other hand, conducts research because it is part of his job and does not really put effort into improving his projects. Let Charles be a member of the admission committee at university C, which has an open PhD position. If both Alice and Bob apply for the position, he will see that Bob has been involved in more research projects than Alice; moreover, Alice has no publications and has spent her time on teaching. The thing is, Alice teaches really well and gets positive feedback from both professors and students. It is clear that Alice can do her job above average. Bob, on the other hand, does not even need feedback: he has proved that he can do research through his projects and publications. Suppose that both Alice and Bob started working as faculty when they were master's students, and that they are now PhD students who want to find a position at a different university for their own reasons. Considering that both Alice and Bob are currently faculty members, Bob is one step ahead of Alice in terms of getting the PhD position, if not many steps. What can Alice do to prove that she could be a far better researcher given time and opportunity? Is there any "Charles" who is ready to give her that chance? If you are "Charles" (a member of an admission committee), would you give that chance to Alice? If you would, under what conditions?<issue_comment>username_1: On your last question: in the US, it depends very much on the institution. Research-heavy universities are going to pay attention to your publications and ignore your teaching. They will tell you that your teaching matters, but if you have good publications, nobody will care that you've never taught a class... whereas the best teaching evaluations won't help you get the position. Once you start as a professor, your optimal strategy is to spend just enough time on teaching so that students won't complain to the department chair. There's a lot wrong with that, and if you enjoy teaching you may want to spend more time on it. But at the end of the day, you don't get non-renewed for poor teaching evaluations when you're a productive researcher, whereas the most glowing teaching evaluations won't help you get tenure if you don't meet the research requirements. Teaching colleges care less about research and more about teaching, so there are places that value it. But these institutions, on the whole, are a lot less prestigious and pay substantially less. (There are exceptions, of course.) I'll let others answer your question about PhD studentships, as I'm not familiar with that system. However, I do want to note that doing good research is often more about putting in the hours than being particularly brilliant. It seems to me that just about anyone in any field, after completing graduate coursework and reading 100 papers, can think of something that is new and interesting. It's finding the time to read those 100 papers (and to sit down and think about extensions) that is difficult.
So your first goal should be to prioritize: less time on teaching, more time on research. Teaching can take up your entire day if you let it -- and it can be very rewarding. But if your goal is a research-oriented position, then you can't allow it to do so. Upvotes: 2 <issue_comment>username_2: "What can Alice do to prove that she could be a far better researcher given time and opportunity?" How do you know? From your description, Alice pines to do research but has absolutely nothing to prove that she is a better researcher. Teaching and research are entirely different skill sets; being good at one does not mean being good at the other! She has allowed herself to be stuck in a mainly teaching role, rather than a research-focused role. Universities don't just want researchers, they want sharks: people who know exactly what they want and go after it. Alice making excuses will not make her appear to be a shark. If Charles is interested in hiring a researcher, why would he ever choose someone who has, from your description, absolutely no experience in research over Bob, who presumably has a track record? That said, if Bob isn't a shark either but just kind of drifted from lab to lab and acquired the 15th name on a couple of papers along the way, I can guarantee that they aren't going to be too interested in Bob either. If Alice wants to do research, then she needs to make some fundamental changes in her life. Her best bet is probably to carefully read her contract with regard to responsibility hours, then go to her department and demand a change in her schedule to allow her greater time for research. Then she needs to buckle down and produce and publish. Alternatively, she can try applying to places and say that she is looking to break away in order to do research, but again, research is expensive and she will really need to sell whatever experience she has in order to convince them that they should take the risk on her. I'm genuinely sorry to have to be blunt like this, but I see no other way. I think there should be far more appreciation for teaching-oriented professors in a learning institution. Instead they get some lip service and little else. But you came here for truth, and this is the truth as I know it. Nobody will care about what Alice might do if they see absolutely no background that supports such assertions. On a personal level, as an example: when going for the NSF graduate fellowship, I didn't receive one even though the reviewers stated that the idea was great, the proposal well written, and the research reasonable and doable, because I hadn't published enough papers. As an undergrad, I had published three. That wasn't enough, apparently. Nonetheless, it hammers home the point that it didn't matter how bright I made the future sound given that my past wasn't up to snuff. Upvotes: 4 [selected_answer]<issue_comment>username_3: *I think a lot of confusion in the question comments stems from Alice and Bob both being called "faculty" employed at universities while still looking for PhD positions. I am interpreting the situation as follows - if this is fundamentally incorrect, tell me, and I will update or delete:*
They currently are not PhD students or on any sort of track to a "better" position, so they are considering applying to a PhD programme in a different, presumably better university, likely in a country that it not familiar with their current situation.** *(this understanding is based on a few incoming students from East Asia that were in similar situations before coming to our university - it used to confuse the heck out of us when they said that they were permanent faculty back home, but the key to understanding the situation is that in some weaker universities in Asia one can essentially be a permanent research or teaching assistant, which are officially called something like "pre-PhD professors")* With this understanding, I think @username_1's answer is out of scope - it focuses on academics applying to *faculty* positions, not people that are currently on a faculty position of sorts which are now applying for PhD student positions. --- After this lengthy introduction, my answer is to a large degree similar to the answer by @username_2. > > What can Alice do to prove that she can be a way more better researcher given time and opportunity? > > > At the end of the day, **Alice does not yet know that she will "be a way better researcher" than Bob**. The fact that she gets great feedback on her teaching shows that she is good at teaching. It does not mean that she will certainly be a great researcher. As you say yourself, Alice is indeed one step farther "away" from a great research position than Bob simply by the fact that Bob already is on a research position (and, presumably, is currently learning the trade and improving, even if the position may not be optimal) while Alice in her current position is not. > > Is there any "Charles" who is ready to give that chance? > > > I assume Alice's best bet is to apply to PhD student positions with a stronger teaching focus. For such a position, her current CV makes her a competitive candidate versus Bob. At least in central Europe, such positions *do* exist. I know that because I had a position like that for the better part of my studies. Another example is the web page of a [former colleague of mine](http://www.infosys.tuwien.ac.at/staff/mvoegler/) who is doing a PhD financed via a lot of semi-independent teaching (note the job title "University Assistant" - this is what this is called in Austria). However, there are two further challenges: (1) for such PhD student positions, strong knowledge of the local language is often required for undergrad teaching, and (2) those positions are not numerous and they are often not widely announced. > > If you are "Charles" (a member of an admission committee), would you give that chance to Alice? If you would, under what conditions? > > > Only if I have a position that requires a lot of teaching. To be blunt, I see very little reason to hire Alice on a pure research position over Bob. Upvotes: 2
2015/12/27
744
2,979
<issue_start>username_0: I have an affinity for Germany, and I've heard a lot about the growing value of international experience, so I'm entertaining the possibility of applying for an MS at a German institution. However, I'm afraid that leaving the US to study aerospace engineering is a bad idea because the US leads the aerospace industry (NASA budget > ESA budget, best aero university rankings, US institutional inertia, etc.). Thus, is it impractical for a US undergrad in aerospace engineering to pursue an MS in Germany? I don't think it's impractical if I'm looking at the complete package of new knowledge/research, new engineering outlook, and cultural diversity, but I'd like to hear y'all's thoughts. I just learned about the DLR's master's programs, so keep 'em coming!<issue_comment>username_1: Absolutely yes, it is practical, and no, it is not a step back. I think your fear is based on anecdotal experience -- did you meet some students somewhere who expressed an interest in doing an MS in the USA? I don't think so; most students go to the USA for a PhD, not an MSc, primarily because it is easier and less stressful (in Germany, you get a working contract and a specific number of years to finish: three, or rarely four). Also, MSc programmes in Germany are free, so I cannot understand the logic behind going to the USA for an MSc. The educational and research quality depends on the institution you choose, but other positive aspects include living in a totally different part of the world, and finishing your studies abroad can give you skills you wouldn't have gained by staying in the USA (honestly, I can't name any particular one -- for me, the MSc was just a continuation of my BSc degree -- but in today's world this kind of mobility is surely favourable). The EU invests a lot of its education budget in mobility schemes (Erasmus+, Erasmus Mundus, CEEPUS, WERAMED, etc.) Upvotes: -1 <issue_comment>username_2: Why would it be a step back? After all, a large part of the Rosetta/Philae mission last year was planned and executed by German researchers. You can just apply to any MS program (or just to the programs which are held in English if your German isn't good enough) and get a decent MS over here, but if you want to pursue a specific goal and do a really great MS, you should scout out MS thesis opportunities before you come. Find a professor at a German university whose research you're interested in and apply for the MS there. Or line up a thesis opportunity at one of the research organizations and then apply to the MS program that the institution is affiliated with. For example, the DLR (German Aerospace Center) offers [MS thesis topics](http://www.dlr.de/dlr/jobs/en/desktopdefault.aspx#StudienAbschlussarbeit/S:133). "Every European student I met at an AIAA conference" is a very small subset of all European aero students. And they've probably heard a lot about "the growing value of international experience", too, so it's natural for them to want to come to the US since they already speak English. Upvotes: 2
2015/12/27
963
4,068
<issue_start>username_0: I am considering submitting a single author paper to a computer science conference that would be very expensive for me to attend based on travel costs alone. If accepted, is it realistic to expect that I could find some grant or other means of covering the entirety of the cost? I really cannot afford it on my own. The conference has a student volunteer position which I will definitely apply for, but it does not cover travel costs, only registration costs. Also, the project is not quite finished – I anticipate a couple more months of work.<issue_comment>username_1: Many universities have funds specifically for student travel to conferences. (For example: [University of Washington](https://www.washington.edu/undergradresearch/students/funding/urcta/), [Wayne State University](http://urop.wayne.edu/travel.php).) Even if there is no special fund at the university level, your department may be willing to sponsor all or part of the trip. While it's more common in my experience for student travel to be funded directly by their advisor from their research grants, my department has, on occasion, been willing to pitch in (e.g. under circumstances where a student has no advisor, or the advisor can't sponsor the trip.) Many ACM SIGs (special interest groups) and IEEE societies offer student travel grants including travel costs (up to some amount) for their conferences. Many conferences in computer science arrange student travel grants from a government funding agency and/or industry sponsors. So you might consider submitting to one of those conferences instead, if there is one relevant to your field. There are also special grants for minorities, e.g. this one from [ACM's women in computing](http://women.acm.org/scholarship). Regardless of the funding issue, I encourage you to find a faculty advisor to give you feedback on your work. (Also, every travel grant *I* have ever applied for required a letter from an advisor.) Upvotes: 4 <issue_comment>username_2: Are you planning to apply for graduate school in the fall? If so, I'd definitely submit to the conference. There are some potential avenues for funding (see below), but even if you have to pay for it out of pocket, it's likely worth doing. You may end up with an extra $1,000 in student loans, but if that increases your chances of getting into a good program, it's money well spent. Conference attendance at this stage is an investment. Possible sources of funding: 1. Check your college's website for travel grants. There's unlikely to be anything for undergraduates, but you should check to make sure. 2. Are you part of any honor societies? Sometimes, they have funding for their members that you can apply for. 3. Ask your undergraduate adviser if they have some funds for student development or know of any available travel grants. 4. The conference may offer travel funding for minorities. Although in my experience that tends to be available to graduate students, not undergraduates. 5. Is there an office promoting diversity on campus? They may have access to resources for minority students and some of them may be discretionary. Doesn't hurt to ask if they can pitch in and subsidize your conference. 6. Once you have ruled out those 5, contact the dean of your college. My very first conference was funded out of the dean's discretionary budget. It's not that common for undergraduates to present their own research and I'd think universities would generally be supportive of that. 7. 
If that also fails, you can try a long shot and contact the research office on campus. They usually promote faculty and graduate student research, but they might be interested in promoting undergraduate research or getting a small press release out of it. Doesn't hurt to ask. I would strongly advise against adding a co-author just for conference funding. Having a paper on your own at this stage is quite valuable as it will set you apart from other applicants. Don't give that up for pocket change, even if the amount right now seems like a lot. Upvotes: 3
2015/12/27
999
4,363
<issue_start>username_0: I am a PhD student in Management at the Autonomous University of Barcelona. While working on my thesis, I found that there is no piece of code or software related to a specific but important topic in efficiency analysis and benchmarking. I am really eager to invest part of my time in writing a package in R in order to help other researchers, as well as to learn the topic deeply through coding. Moreover, when I publish my paper I can motivate interested researchers to use that piece of code and re-evaluate my work or extend it. Now my question is whether writing such a package is equivalent to a chapter of a thesis / a scientific paper. Based on the answer, I will decide whether to work on it right now or to defer doing so. I will be grateful for any help you can provide.<issue_comment>username_1: Writing useful software is not *equivalent* to writing an article or a thesis chapter (which are generally not equivalent to one another either). It may, however, be equally valuable for your career, depending on what you want in your career. The real question is whether people will find it highly valuable. If they will find it valuable, then it can be worth something to you. If you want it to readily count towards a conventional academic career, then after the software is done you can write a journal article about it, and people can cite that. [If you search for "analysis software" in Google Scholar](https://scholar.google.com/scholar?q=%22analysis%20software%22), you'll find many highly cited works of this sort, showing that it is certainly possible, even if not the straightest and most conventional road to take. Upvotes: 5 [selected_answer]<issue_comment>username_2: Writing software is not equivalent to writing a thesis, because the thesis is a theory defended by a practical demonstration. It could be equivalent to the *research* *project* about which a thesis is written. Any novel elements (algorithms, architecture, or even methodology) could be the parts presented as the question to prove. (Your committee may actually allow you to pretend otherwise, but that really is more of a non-thesis option if you never actually write a thesis and defend it.) The classical approach to this question is to write up and present a research proposal to your panel. If they accept it, it becomes your official thesis topic. Obviously, as already suggested, most bounce the ideas off of their adviser and peers before the formal proposal. Upvotes: 1 <issue_comment>username_3: Writing an R package is not necessarily the equivalent of a PhD thesis chapter or a journal paper. The best people to answer those questions would be your current PhD advisors. However, for your own career it all depends on the respective quality, impact, or influence of your R package vs. your completed PhD thesis. A successful R package does give one incredible credibility within the quantitative fields, sometimes even as much as or more than a PhD. Let's put it this way: if two PhDs are pretty much equal in every respect, but one has demonstrated superior knowledge of and impact on the R community with an innovative and successful R package related to his field of expertise, the PhD with the R package will be considered a lot more valuable than the one without. Given that, I don't think your question is an either/or. I think you should do both. Complete your work to earn your thesis in the most traditional way.
But also take the extra effort to develop that R package in a manner most relevant to your field while keeping it generally applicable. This is going to take you more time, but it is well worth it. If you don't do it now, when you have the liberty of budgeting the time for it, you may regret it later when professional time pressures prevent you from ever reopening this opportunity. Upvotes: 2 <issue_comment>username_4: As a PhD advisor, I would count an R package as a journal paper -- at least if I had to evaluate your thesis :-). The answer, however, depends on your university. Why don't you write a package and publish it in a journal? Using it as part of your thesis is of course permitted. And if you get other users on board, everyone can see the importance of your work. Although this is not established as a standard, I would encourage you to do so. Upvotes: 2
2015/12/27
1,559
6,925
<issue_start>username_0: Authorship is often a thorny issue in academia, so I'd like to ask for your insight into the authorship in the following situation: My advisee's MA thesis is mainly and originally based on a new idea from me. We worked together to refine the idea and turned it into a thesis. I worked closely with him, spending 6 hours per week meeting with him to examine the collected data (he spent about 4 hours per week on data collection). Finally, he completed his thesis with a lot of work and graduated. After his graduation, I approached the student and asked him if he would be interested in publishing the study. He declined this offer, mainly due to his availability and his work, but appreciated the possibility. Seeing the potential of the collected data, I decided to do this alone. To make the study publishable, I substantially revised more than 80% of the student's work, using a different analysis method to recode the data and revamping the introduction, literature review, discussion, and conclusion sections. In other words, the revised paper, albeit being based on the student's raw data set, is already substantially different from the student's original work; the rationale is different and the focus is also different (just to appeal to the scope of the journal of our interest and its readership). The revised work was then submitted to a respected journal. I took care of all of the responses and revisions (three rounds of major revision + two rounds of minor revision), and the paper finally got accepted. Of course, I asked the student to read the paper and he liked/approved it. I listed my student as the first author and myself as the corresponding author. But given my student's contribution to this publication, is he qualified to be the first author? Or should he be listed as the second author and myself as first author? What should I do so that I can strike a balance between the ethical code and the effort I have put into this paper?<issue_comment>username_1: There is no hard and fast rule about what exactly qualifies a person for first authorship, which is why it is so often a thorny issue (except in those disciplines where the convention is to always list people alphabetically). It is also often highly dependent on the particulars of a given situation. For example, I have had situations like this a number of times and in some cases have ended up keeping the student as first author and in other cases ended up replacing them. Accordingly, the principles that I would go by in making such a decision are, from most important to least important: 1. **Did you previously make an agreement about authorship order?** If so, you should almost always respect that agreement. Renegotiating, even if things have changed significantly, is likely to be coercive to the student given the difference of power between you. 2. **How much work was the non-writing effort?** Advisors are often much, much better at writing than students, and 80% of the writing may be far less than 80% of the work that went into a paper, particularly if the advisor's writing focuses on the "supporting" material like references, introduction, discussion, etc. If the student has put in a lot of hard work, especially if there has been significant intellectual work, in gathering the data, then I give the student first authorship even if I basically wrote the whole text. 3.
**Is the student interested in continuing in academia?** If authorship could be argued either way, and the student has any possible interest in continuing, it's generally better to put them first. It's a gracious act promoting your proteges and you still gain recognition as the advisor on the paper. In the end, it is likely that it will not be clear-cut exactly who should be first author. Personally, I prefer to err in the direction of giving a student too much credit rather than too little, but others may choose otherwise. Upvotes: 7 [selected_answer]<issue_comment>username_2: To me it almost sounds as if you are more or less the *sole* author of this paper, but you should *most* definitely cite the thesis as the source of the data, as well as mark the 20% you retained as citations. You make it sound like a separate pitch done without the original author's involvement even if the pitched material is the same, and the original author was not involved in any of the decisions involving this paper and particularly was not able to review or veto it. If he does not get an opportunity to agree with your approach, putting a paper out in his name seems inappropriate to me. To me, it sounds like citation would be more appropriate than attribution. Of course, sending a courtesy copy of the paper to him before publishing and making sure that he's fine with the approach you have taken for publication would still be a good idea. He can then decide himself whether or not he considers himself properly attributed, and then he won't feel slighted and/or cause trouble later. Upvotes: 2 <issue_comment>username_3: I agree with much of what's said in other answers. I want to add a few comments: Order of authors has different meanings in different disciplines. In some situations in some disciplines, the last author is the one who's most important. You and others in your discipline will have a better sense of the kind of meaning that others in the discipline read into author order. Graciousness is nice, but dishonesty is not. There have been discussions about practices in which people receive authorship as pure favors, or because they are funders, or lent equipment, etc. Some people argue that such "authorship", like being listed as the "author" of a paper ghost-written by a drug company rep (this happens), is a form of misrepresentation. There is ongoing debate about this issue, and some journals have adopted new, more stringent practices that partly mitigate potential problems. Your case is different, but the need to accurately represent authors' contributions is not so different. If it's OK for an advisor to graciously give a student higher prominence than is deserved, would it be OK for a student to graciously give the advisor higher prominence than is deserved? The power relationships are different--so maybe these are not the same thing--but the potential for misrepresentation is the same. Life is complicated. I don't want to claim that there is always a clear right and wrong way to present authors, and it may be that erring in favor of the student is a good idea within limits, but I did want to raise this issue. Why not add a note on the first page stating what each author contributed? This is what some journals now require (although often in such a vague way as to be uninformative). This wouldn't completely resolve the problem, because many people will still see the names in the citation without reading the paper. Upvotes: 2
2015/12/27
742
3,325
<issue_start>username_0: I have been approaching PhD applications under these assumptions: * The focus is mostly on the letters of recommendation. * Every admitted applicant has several letters that make a convincing argument that he would be an extremely successful researcher in the given field. * These letters are only credible if they are from reputable professional researchers who know the applicant well. But how much evidence is necessary to show that the applicant would be a successful researcher? First, what does the recommendation letter author need to see in the student to make that argument sincerely, and how much interaction is necessary before the author knows the student well enough to make that judgment? Second, what evidence does the admissions committee need to read in order to believe that argument? (I am studying computer science in the US, if it makes a difference.)<issue_comment>username_1: * The author is expected to discern some strengths in the applicant's personality and academic record which, from her point of view, would help him become known and succeed as a graduate student. For instance, the author might praise his teaching abilities, based on his willingness to help other students understand academic subjects better. As another example, she might describe him as assiduous because he systematically summarized many articles to compile his essay's introduction. The more interaction she has had with the applicant, the more robust an account she can give of the person... As a matter of fact, the author can focus on every positive aspect of the applicant and try to elaborate on the details, reasonably... * Readers do not usually expect official evidence for the points asserted within the LoR. They will find the letter credible if the author has shown that the level of support for the person matches the depth of the relationship with him. Obviously, knowing the author's identity is an indispensable factor in lending authority to the LoR and its content. Good luck Upvotes: 2 <issue_comment>username_2: Adding to @Mantinking's answer, here are some helpful insights. * *The focus is mostly on the letters of recommendation.* Your undergrad/master's thesis carries as much weight (even more in some cases) as the recommendation letters, and you should have research experience in a field more or less related to the group you are applying to. **A good recommendation stamps credibility qualitatively, whereas a good master's thesis (publications) communicates potential quantitatively.** * *Every admitted applicant has several letters that make a convincing argument that he would be an extremely successful researcher in the given field.* Most institutes will specify the number of recommendation letters, so *several* may be a bit misleading. From my experience, 2 would be sufficient in most cases. The letter would have additional impact if it comes from your thesis adviser or a co-author. *Extremely successful* is a relative phrase; let's base it on the gross impact factor. I had only one recommendation, from an *averagely successful* researcher, and still got into a good research lab. I had a strong thesis, though. Upvotes: 1
2015/12/27
1,800
6,947
<issue_start>username_0: I am a male graduate student in natural sciences at a public university. I want to simplify my life a little bit by wearing the same outfit (more or less) every day. (Obviously I would have multiple copies to be hygienic.) I tend to wear untucked casual button-downs with some casual khaki pants and sneakers. Would this be looked down upon by the faculty? I am not so worried about other graduate students, but I do not want to make a poor impression on the faculty. Any guidance on this would be much appreciated! Of potential importance: I am TAing one class right now and will be taking two classes this upcoming semester.<issue_comment>username_1: I think the social norm in the US is to never wear the same clothes (or in your case, clothes that look the same) on two consecutive days, especially if you want to appear professional (or what passes for looking presentable in academia). That being said, if you don't see your advisor or students on a daily basis, you might get away with it, as people may assume that you did change your clothes on days when they didn't see you. But really it is not difficult to have two or three different outfits and "rotate" through them. As I've learned, people don't care about whether you washed your clothes before you wear them again two or three days later (as long as they aren't too dirty), provided you change your clothes daily. An anecdote to illustrate how easy it can be: I once had a math professor who would wear almost the same plaid shirt every day, except for the fact that all of the shirts were of different colors. I suspect that he just ordered every single color available for that one shirt he really liked. Upvotes: 5 <issue_comment>username_2: One of my faculty colleagues who just received tenure (at my R1 university) has worn basically the same outfit (same-colored button-down shirt with khaki trousers) for the past decade - from graduate school to his current position (he may have indeed been wearing this his entire life, but we only have data for the past decade). I believe his closet only has one type of shirt and trousers. Aside from gentle ribbing from others about his 'uniform', it's well tolerated, as it's a minor eccentricity compared to some of the other faculty. Just be open to someone asking you if you: 1) have multiple shirts or a single shirt; 2) wash them regularly. **tldr:** If it saves you money / brain-cycles / spoons / closet-space to wear the same uniform clothing every day, do it. It worked for <NAME>, it can work for you. --- Fine print: There may be a gender factor, as men in academia and industry who basically wear a uniform are well known, but it is rare to encounter women who do the same. This requires more exploration. Upvotes: 6 <issue_comment>username_3: I think the standards in academia are much looser than the standards in the business world. So bear in mind that when you get out of school and get a job -- assuming you get a non-teaching job -- you're going to have to upgrade your wardrobe. Still, if people think you're wearing the same clothes for weeks at a stretch, some number will think that's distasteful. I think the simple solution is to have two or three different colors and alternate. Personally, I have several white shirts and several blue shirts, and several dark blue pants and several gray pants (and I think one khaki). And I just make sure that when I change clothes, I put on a different color than I wore the day before.
Unless you have some reason why you want to look exactly the same every day -- you consider this an essential element of your personality, or you swore an oath on your father's grave that you would always wear the same colors he was wearing on the day he died, or whatever -- I just wouldn't. Many won't notice, and most who do won't care, but some number of people will think you look grubby, and one of those might be someone whose opinion is important to you. Upvotes: 3 <issue_comment>username_4: Life can be simplified without resorting to wearing a set of uniforms. While a uniform is the ultimate form of simplicity, when you are the only one wearing it in a place where a uniform is not the norm, people ***may*** perceive you as wearing the same clothes every day even if you change into a clean set daily. It's easy to explain this to an acquaintance, but difficult to clarify if a professor has decided that you're unkempt. They cannot really come up close and sniff you, so visual cues are the only cues. We can get the best of both worlds by simplifying the wardrobe while keeping maximal versatility. One of the common concepts is the [capsule wardrobe](https://www.google.com/search?q=capsule+wardrobe). Here is an example: [![enter image description here](https://i.stack.imgur.com/28jNW.jpg)](https://i.stack.imgur.com/28jNW.jpg) By using different combinations one can get different looks from Mon to Fri with only a few shirts and a couple pairs of pants. Designing a capsule wardrobe does take time, but it'd serve you well for a couple of seasons or even longer. Tips on making one are available online. Here are some general points: 1. Plan your shopping instead of buying on impulse. 2. Decide on a set of color schemes that are easily paired; colors along the grey, blue, and brown scales are a good start. I often pick one accent color (currently green) for more variation (e.g. a tie or socks with green patterns). 3. Start small. For a season, I found that 3-4 tops, 2-3 pants, 2 pairs of shoes, and 1-2 jackets do the job quite well. I don't wear suits in my current position. 4. Buy quality stuff, especially for coats, suits, wool sweaters, and shoes. It's actually not that difficult. I tried it last year: I cleaned up my closet, then just found a scheme online that I liked and modified it to the colors and style that I wanted. So far I can pick out my clothes without even thinking about it, and I've never felt I'm wearing the same thing day to day. Hope this helps! Upvotes: 6 [selected_answer]<issue_comment>username_5: I think there are two issues to consider here: 1. **Are the clothes clean?** That's the gold standard here. People might look more closely when you wear the same outfit every day, but if the clothes are well-maintained and don't smell (let alone stink), you should be fine. 2. **Do you own that you wear the same (type of) clothes each and every day?** Seeing it as "is it acceptable" (by others) ignores that your attitude influences how others perceive you and your clothes. If you are insecure about it, others will notice and it might become an issue. If you own it, I don't think you have much to worry about (and as others have written, some highly creative people did the same thing, so you're in good company). Hmm, also, [going by this PhD Comic](http://www.phdcomics.com/comics/archive.php?comicid=1161), there's also dressing for the job you want. ;-) Upvotes: 1
2015/12/27
288
1,214
<issue_start>username_0: Venues like CVPR (IEEE Conference on Computer Vision and Pattern Recognition) have a peer-review process. Depending on the journal (or the publication published after a conference - is that also called a journal?), one might simply know that all articles were peer-reviewed. However, those reviews are not publicly available as far as I know. This is also a problem with arXiv / bad journals. They might have (many) good articles, but one has to check them. A work-around might be to look at citation counts / who cited the article, but citations are not necessarily positive. Are there any typical / recommended ways to write reviews and make them publicly available?<issue_comment>username_1: Depending on whether you want to leave short comments on papers you weren't invited to review or make public full reviews of papers that you were, you might look at <http://www.pubpeer.com> and <http://www.publons.com> respectively. Upvotes: 2 <issue_comment>username_2: Since you mention the arXiv, you might be interested in [Episciences](http://www.episciences.org/), which aims at enabling what you describe, although I don't know what the current status of the project is. Upvotes: 2
2015/12/28
1,061
4,372
<issue_start>username_0: In the mainstream publishing model, it can take more than a month from the time a manuscript is submitted to the time reviewers are assigned to review it. I feel this is a bit unfair, as editors from these publishers generally require reviewers to submit their recommendation within a month from the time they agree to review. I believe the effort required to review a manuscript is much greater than the effort to find a reviewer. My question is: why can it take so long to find a reviewer?<issue_comment>username_1: If several of the people the editor asks decline to review (and especially if they [wait a while to respond](https://academia.stackexchange.com/questions/25932/can-someone-really-be-assigned-to-review-in-the-aps-s-editorial-system-for-over) and then decline), then it takes a while to assign reviewers. Keep in mind also that editors are typically [handling many papers simultaneously](https://academia.stackexchange.com/questions/9957/how-many-manuscripts-should-i-agree-to-handle-as-an-editor), which can be a lot of work in aggregate. In some fields, there aren't that many suitable experts, so it's hard to find enough who are willing to review. See e.g. [How long is too long to wait for a rejection because of a lack of reviewers?](https://academia.stackexchange.com/questions/20309/how-long-is-too-long-to-wait-for-a-rejection-because-of-a-lack-of-reviewers) Upvotes: 4 <issue_comment>username_2: There are several factors that can play into this: * Some journals have a strong editorial hurdle, i.e., a high desk-rejection rate, and the editor is expected to actually spend some time on the paper to judge its suitability for the journal. Thus editorial handling includes more than just finding a reviewer. * Some editorial systems only indicate that a paper is "under review" (or similar) when a reviewer has actually agreed to review the paper, while others already do this as soon as the first requests have been sent to reviewers. * In some fields, it is common for reviewers to blatantly exceed the allotted review time, so that a journal requiring its reviewers to submit their reviews within a month may still see an average review time of two months or even more. * Depending on the availability of potential reviewers for a given paper in the field, some journals follow a strategy of requiring reviewers to submit a review quickly, but accept that many reviewers reject the request and that considerable time is spent finding reviewers. * Both editorial handling and peer review do not take much work time in comparison to the allotted time (a day's work or less in most disciplines). The main delay arises from the editor or reviewer having to find time for this given their other academic duties. Thus if, for example, the editor is at a conference for a week, they have to catch up with their regular work and the editorial work afterwards, which may amount to a significant increase in the editorial handling time. * Some editors are just slow. Upvotes: 6 [selected_answer]<issue_comment>username_3: As a relatively new academic editor of [PeerJ Computer Science](https://peerj.com/computer-science/), I can tell you that actually finding suitable reviewers is **substantially** more difficult than I used to give it credit for. For many papers that land on my virtual desk, I am not actually an expert, so I need to invest some time looking over the literature of the area to figure out who actually *are* suitable reviewers.
My current "acceptance rate" for reviews is substantially below 20%, and it would be even much lower if I did not have a reasonable personal network (that is to say, at least 75% of all reviewers who actually accept are people I know personally). Approximately 50% of those who do not accept never respond, so you can't really tell whether they accept or not until you've waited a week or so. And - particularly annoyingly - it is not unheard of for people to accept a review and then never get back to you. All in all, I would say taking one month to find reviewers is long, but not outrageous. Small nitpick: > > I believe the effort required to review a manuscript is much greater than the effort to find a reviewer. > > > While this is certainly true, an editor also typically handles many more manuscripts than a reviewer... Upvotes: 4
2015/12/29
623
2,765
<issue_start>username_0: [This question](https://academia.stackexchange.com/questions/41687/what-is-behind-the-indian-undergrad-research-experience-spam), asking whether the subject of the current question is spam, serves as the motivation for this further question, which seeks solutions. As can be seen from the answers and comments, emails from undergraduate students seeking short-term internships, sent *en masse*, can be identified as a legitimate problem faced by many in academia. Most of the emails have a common design or are made from the same template, with an equivocal tone. Even researchers who haven't even started a group were targeted, asked to consider an application to join the group (which doesn't exist), with the applicant claiming to have *read* a publication (which doesn't exist either) on such-and-such topic. The epicenter of this mass outbreak is a *particular country*, according to the OP and the answerers. Some pointed out that most of the emails are indeed from legitimate, if overly opportunistic, students, and some pointed out the reason why. If this is indeed a growing trend and a real issue, * How can the targets weed out these emails, considering that some applications are indeed *real*? * Taking into account that this may harm the chances of talented students and may form a negative general opinion (**?**), what measures can be taken to make students aware that sending out emails en masse is not a professional way to attain a position?<issue_comment>username_1: What I do is to provide on my web page a very explicit set of directions for how to contact me about possibly doing any kind of work with my group. I have separate sections for undergrads, grads, and postdocs. For undergrads, I explain a bit about what we do in my group, describe what undergrad research in the group is like, ask them to answer a few questions about what draws them to our work in particular, what they hope to gain by working with us, about their skills and background, and about their general interest and experience reading primary literature. I also ask for a transcript. You could ask anything, though. With something like this in place, you can either ignore with a clear conscience any emails that fail to address these questions, or you can send a brief form-letter response asking that the applicant address these questions in order to be considered for a position. Upvotes: 4 [selected_answer]<issue_comment>username_2: In many email clients, you can set up rules for where incoming messages should appear. You can set up a rule that sends a message containing a particular set of phrases directly to the trash/spam, which would at least filter out all the template-based messages. You might also be able to set up an automatic response to such messages. Upvotes: 2
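To make the filtering idea in the last answer concrete, here is a minimal sketch of how such a rule could be implemented server-side with Python's standard `imaplib`. Everything specific in it is an assumption: the template phrases, the mail host, and the folder name are hypothetical placeholders, and most mail clients and webmail services offer the same functionality through built-in filter settings, with no code required.

```python
import imaplib

# Hypothetical template phrases; replace with wording you actually receive.
TEMPLATE_PHRASES = [
    "gone through your esteemed research group",
    "read your recent publication with great interest",
]

def file_template_mail(host, user, password, target="Internship-Requests"):
    """Move messages whose body contains a known template phrase
    out of the inbox into a separate folder."""
    with imaplib.IMAP4_SSL(host) as conn:
        conn.login(user, password)
        conn.create(target)  # returns an error status if the folder exists; harmless
        conn.select("INBOX")
        for phrase in TEMPLATE_PHRASES:
            # The IMAP BODY criterion does substring matching on the message
            # body server-side, so messages are never downloaded.
            _, data = conn.search(None, "BODY", f'"{phrase}"')
            for num in data[0].split():
                conn.copy(num, target)
                conn.store(num, "+FLAGS", "\\Deleted")
        conn.expunge()
```

Run periodically (e.g. from a cron job), this keeps the inbox usable while preserving the filtered messages in a separate folder, so the genuine applications among them can still be reviewed later rather than deleted outright.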
2015/12/29
4,773
18,617
<issue_start>username_0: I have been supervising a female PhD student for a couple of months. She is the first female PhD student I am supervising and got the position on merit. My view is that her gender does not/should not change anything in how I supervise her or what I expect from her, the rationale being that doing so might ultimately hurt her in her post-PhD career. For this reason, I have not brought up her gender in any of our discussions. I briefly contemplated telling her that I will treat her the same way as her male peers, but did not do so because it seemed wrong (as in "of course, why is he telling me this?"). Lately I have been wondering, however, whether there are things I should do that I might not even be thinking of. Perhaps my own experience and the male-dominated environment blind me so that I do not perceive my own sexist/reverse-sexist attitudes and ultimately do not do things that I should be doing (and vice versa)? So I am interested in reading advice/views on supervising female PhD students (in male-dominated academic environments). A few notes to address anticipated follow-up questions: Yes, I am male. I did not specify where my institution is located because I am interested in a range of opinions/comments. The agency that finances her project offers a special stipend to attend workshops/meetings for female students; I have encouraged her to attend.<issue_comment>username_1: > > My view is that her gender does not/should not change anything in how I supervise her or what I expect from her, the rationale being that doing so might ultimately hurt her in her post-PhD career. For this reason, I have not brought up her gender in any of our discussions. I briefly contemplated telling her that I will treat her the same way as her male peers, but did not do so because it seemed wrong (as in "of course, why is he telling me this?"). > > > That is exactly the advice I would give in this situation, so congrats, you've already figured it out. > > The agency that finances her project offers a special stipend to attend workshops/meetings for female students; I have encouraged her to attend. > > > Try not to *overdo* the encouragement. I get *so* much spam inviting me to assorted "women in engineering" events. On a related note, you might want to watch out for the possibility that you (or your department) might have a tendency to overuse the students who are "visible" minorities (race, gender) in publicity materials and outreach events. Some people don't mind (some even appreciate this), but some dislike being used as "poster children" to show how diverse the department is. (There's a joke in [this TV episode](http://scrubs.wikia.com/wiki/My_Fifteen_Minutes_transcript), where the African-American doctor calls out his college for overusing him in their brochure by photoshopping him into [the same picture twice](http://scrubs.wikia.com/wiki/My_Fifteen_Minutes?file=1x8collegebrochure.jpg).) I'm sure people must have published studies about how female PhD students are statistically more likely to have families and be concerned about work-life balance issues, and how you have to be more supportive, etc. But all that really just comes down to being communicative and supportive of your students, whatever their individual needs might be. For large numbers I'm sure there's a gender dimension there, but at the individual level it's just about being a good supervisor to an individual student. 
Upvotes: 8 [selected_answer]<issue_comment>username_2: In addition to @username_1's excellent answer, I would recommend one other important step to take: start following the blogs and/or other social media writings of some outspoken female academics. Despite best intentions, your perspective is likely to be limited in many ways simply because you are male and not female, and our media tends to provide us with a lot more male perspectives on science than female ones. (Quick: name 10 people who write about science. How many of the people who popped into your head were male?) Explicitly adding more female voices to your media consumption is a good way to broaden your perspective and to decrease the likelihood that you will unintentionally do something problematic in advising your student. For a starting point, let me recommend a few semi-arbitrarily selected blogs that I find interesting: * [Scicurious](https://www.sciencenews.org/blog/scicurious) * [<NAME>](http://blogs.discovermagazine.com/science-sushi/) * [SoapboxScience](http://soapboxscience.org/) * [Zuska](http://thusspakezuska.scientopia.org/) Happy reading---and note that you can apply a similar method to broaden your exposure to other sorts of under-represented perspectives as well. Upvotes: 5 <issue_comment>username_3: Although the question "how do I handle said female in x (male-dominated) environment?" has arisen with increasing frequency, your approach to it is not common - and it fills me with hope for the future to read how you've handled it. You're doing what so few can figure out how to do - you're not treating her differently. You're holding your concerns about what that may or may not mean at bay. Most people do one of two things: over- or under-compensate, and it's my experience that in this time of growing awareness about gender imbalances in certain fields, most people lean towards overcompensating. As a woman, I find that almost worse. When people bend over backwards to tell me how impressed they are that I'm breaking the status quo and enthusiastically express that we need to fix the gender imbalance, then do everything in their power to support me with so much emphasis on this issue, I start to wonder if I was ever really qualified in the first place. I start to question if I succeed because of my hard work and determination or because of my anatomy. It takes the joy out of all the wins, and I frequently feel like a fraud. One of the best examples I can provide is that I'm often approached at conferences by recruiters, and one of the first things out of their mouths is "we're looking to hire more women". Quite frankly, it's offensive. They're not looking to hire "more qualified professionals", or "persons with my particular skill set". They're looking to meet a quota, and not knowing anything about me, they still want to hire me because they can tell from a glance what my gender is. Now I understand there are good intentions there - they want to give me the opportunity to interview. An opportunity that women haven't been given as often in the past. But at this point, I know the opportunity is out there. I know many companies will hire you even if you're less qualified BECAUSE you're a woman and they're trying to prove to the world just how progressive and PC they are to improve their image, while others are simply motivated to fix the problem, but unaware of the best way to help.
Ranting aside, this is what I would hope for, and what has always made me happy when I encountered it in past interactions: Be just as tough on her as you would be with your male students. She'll come out better for it. If she's good at her job because she truly earned her education, she'll blaze a path in the field that will change the minds of the dwindling number of sexist individuals she'll encounter in the workplace by the quality of her work. She'll inspire other women to pursue their passions because her intelligence and work ethic will speak for themselves. If you cut her slack because she's a woman, you're simply raising false idols. Other men will dislike her because she's not as competent or qualified when she graduates, you'll reinforce existing sexist views, and women who wind up working with her who did climb over obstacles to get there won't respect her and will consider her an embarrassment to the movement. Do your part by doing nothing. But if you see her stress, trip or begin to falter, do what you would for any other male student - check in. Mention your office hours, suggest peer study groups, and offer "catch-up" times in alternate class slots if you have other open spots. If she's worthy of her degree, she'll do what it takes to succeed. We'll catch up eventually, both in numbers in the STEM community and in raising our glass ceiling. All we ask is to have the same opportunities. Not a leg up to reach them. Upvotes: 6 <issue_comment>username_4: I agree that you should not treat her any differently than your male students, nor should you point this out to her. However, remember that her gender may affect how she is treated by others in your field (students or colleagues). If she raises concerns about sexism or harassment, above all, listen to her. Then find out how you can support her. As you probably know, Title IX applies to grad students and faculty, as well as undergrads. You may also want to pay a little extra attention to how others interact with her in seminars, research group meetings, and other professional settings. If you see things that concern you, be an active bystander, and let her know you've got her back. Upvotes: 4 <issue_comment>username_5: Keep an eye out for signs of [impostor syndrome](http://geekfeminism.wikia.com/wiki/Impostor_syndrome) and be prepared to counter it\*. Given that you're supervising PhD students, I'll note that this really applies to all of your advisees. However, it's more prevalent among members of underrepresented groups in any given field or community. \*I'll let others provide further advice on that latter part - I don't have any special knowledge or experience there. I'm happy to accept edits on that point, upvote comments, etc. Upvotes: 4 <issue_comment>username_6: One thing to watch is meeting dynamics. A level of assertiveness that would be seen as a good thing in a man may be regarded as shrill or angry coming from a woman. Some of us don't care, but younger, less experienced women may try to get along by softening and suppressing their opinions. That may risk getting their opinions and ideas ignored. [Famous quotes, the way a woman would have to say them during a meeting](https://www.washingtonpost.com/blogs/compost/wp/2015/10/13/jennifer-lawrence-has-a-point-famous-quotes-the-way-a-woman-would-have-to-say-them-during-a-meeting/) illustrates what women do when trying to express opinions safely. All I can suggest is to watch the dynamics of e.g.
group research meetings, and make sure that **all** of your students, including the woman, get a proper hearing when they try to say something. Upvotes: 4 <issue_comment>username_7: For me, as a female grad student in a male-dominated field, it was and is very important to meet female role models. And I started to be that for younger students. So my 5 cents is to introduce her to successful (and nice ;-)) women in your field if you happen to know some and if you can do so in a natural way. I also tend to have very empathetic (male) collaborators, while this seems to be less of a criterion for my male colleagues when choosing their collaborators. My experience is also that female students in such a field tend to need more encouragement (given the same potential/talent). For example, I myself would never have started a PhD without the direct encouragement of my now supervisor (and now I LOVE research). The solution to this does not have to be gender-specific; I totally agree with "But all that really just comes down to being communicative and supportive of your students, whatever their individual needs might be." Upvotes: 4 <issue_comment>username_8: I understand I might have lost people with the following answer. In short: [the dream supervisor has many skills](http://www.findaphd.com/advice/doing/you-and-your-phd-supervisor.aspx), a [specific role](https://academia.stackexchange.com/questions/11273/what-is-exactly-the-role-of-a-phd-advisor), yet ***one goal***: **elevate the skills** of a PhD student **to the level** where **she**/he **can fly away by herself**/himself, **despite** three types of **stereotypes**: his/hers, yours, those from your (her/his+you) environment. Warning: the following answer contains strong allegorical content. One can replace the two bird species with other animals, male/female, as well as other broad categories like majority/minority. This was partly inspired by: * a [Far Side cartoon by Gary Larson](http://www.armoton.com/farside/pic/122-quack.gif) (see below), * the French anime [Wattoo Waatto super bird](https://en.wikipedia.org/wiki/Wattoo_Wattoo_Super_Bird), whose music has been in my head during the whole vacation, and which depicts two fictional bird species (one being goose-like) to describe/criticize human traits. It helped me to step back from the question, as done in some science-fiction or utopian texts. First, to limit the standard gender biases and stereotypes, try to view the situation from another perspective: you are a duck, the field is 95% duckish, and you are supervising your first seagull student. Both are quite similar: they are birds, they fly and dive, they are web-footed. But one common trait, though less visible at first glance, is more important to scholars: they can travel thousands of miles (on merit). The allegory is about focusing on the most important traits in academia, not the obvious ones that matter most in society. Yet, in this allegory, some birds are more familiar than others in everyday life. And the others are more prone to [songs](https://www.youtube.com/watch?v=ThWiS8tW1Uo) or [poems](https://en.wikipedia.org/wiki/The_Seagull_(poem)). So I am a duck, supervising a seagull student. The workplace is 99% duckish. Ducks shake wings with other ducks, but they like to cheek-kiss seagulls. There are other such "habits" that distinguish seagulls and ducks in workplaces. So I told my seagull student: "I do not really like to shake wings with birds, 'cause wings get dirty (and you can get bird flu), but this is a habit.
So I now ask people whether they want to shake wings or not. With those who dislike it, we can agree on a different sign. What do you prefer?". My intent is to show that seagulls can make their own rules in a duck world, slowly but firmly. Once I have changed some of my unconscious duck manners, I can observe other ducks' behavior: do they behave duckishly? Do I feel this may affect the seagull's feelings? Then, observing a specific behavior (daily comments in public about the seagull's feathers), I can tell the seagull (face to face, afterward): "I have seen this behavior; it seems duckish and misplaced to me. If you feel this way too as a bird (or as a seagull in the first place), may I suggest that you talk to the duck face to face, and tell it what happened, how you feel, and what you would like in the future (tools from [emotional intelligence](https://en.wikipedia.org/wiki/Emotional_intelligence)). If you do not want that here, or cannot handle it right now, here is some help you can use: myself, or another trusted person (if possible, a senior seagull you know in the [HR department](https://en.wikipedia.org/wiki/Human_resources)). Just say so. You should also know that even 'talking about how duckish ducks can be' can help". On my own, I did tell a fellow duck how duckish it had been with the seagull, from my point of view, not involving the seagull, and told the duck to think about it. I even had to make public comments about its feathers on a regular basis to drive the point home. It worked. Finally, I confess a little [tern](https://en.wikipedia.org/wiki/Tern) bias when I present scholar birds to the seagull (at meetings, conferences). Showing how other senior seagulls perform in the field can be important for identification and future positions. But warning: you may learn that your seagull student self-identifies as an eagle. You should adapt. That is your duty as a bird advisor: help the bird students find their way. And for yourself: do the same with any other bird students, even ducks. [![Talking to a duck, <NAME>, The far side](https://i.stack.imgur.com/M3NnS.gif)](https://i.stack.imgur.com/M3NnS.gif) Upvotes: 1 <issue_comment>username_9: In addition to all of the excellent advice given in the other answers, you need to take extra care not to allow any hints of a non-professional relationship with her. For example, as a male, if I had a female student visiting my office, I would never close my door, even if she asked. We never speak about our romantic lives, even though we can certainly chat about hobbies or the news. If I'm accompanying her to a conference, I never go to meals with her alone, even though I would do so with a single male student. It doesn't matter whether I'm heterosexual or not. But I'm in the position of power, and there must be no opportunity for even accusations of improper conduct. There are stories of vulnerable women being taken advantage of by their professors, or of flirtatious women winning favour the wrong way - society simply doesn't expect men to behave the same way. Being over-cautious now can head off career-ending complaints later. Upvotes: 3 <issue_comment>username_10: In anti-racism thinking, white folks have "white privilege." In this (by now somewhat dated) piece, scholar <NAME> reflected on that privilege and what it means. <http://nationalseedproject.org/white-privilege-unpacking-the-invisible-knapsack> There are similar reflections on what it means to enjoy male privilege.
For example: <http://amptoons.com/blog/the-male-privilege-checklist/> Setting aside racism and sexism can be quite difficult, especially for white guys. What's easier is becoming more aware of how race and sex can affect your own outlook. I've asked people, alone or in groups, to read this kind of material. It helps get people ready for the kind of "first woman student" or "first black boss" change you are experiencing. Upvotes: 2 <issue_comment>username_11: There are lots of good points made in the other answers. I have read them all, but didn't see this idea. I did see "don't tell her", but I would suggest you do tell her. The reason is that you want to learn, and being male, having worked exclusively with male students, you don't know what mistakes you may be making. That is your point, and your female student could help. If you tell your first female student that she is your first, that your intentions center on fairness and merit, and that she would be helping you improve by gently pointing out gender bias, she might be glad to help. Otherwise, if she does see bias (which you missed), she could easily assume that's just how it is, as though you already know. Upvotes: 3
2015/12/29
872
3,865
<issue_start>username_0: I'd like to apply for an MBA (sick of working in IT and want to change) but seemingly every university I look at requires multiple letters of reference, one of which must be from a current supervisor. Is there any way to get around this requirement? I have zero trust in my current boss, and given that writing the letter would take effort for no benefit to him (since I'd be leaving the company and he'd have to find a replacement), odds are a letter by him would work against me. Also, I'd really rather not owe him a favor. Former bosses/colleagues would be a safer bet, but I left my previous workplace about two years ago and haven't had any communication with them since (moving continents doesn't help either; I can't just invite them for coffee). As an aside, how come one can't get access to higher education (not just MBAs) on one's merits alone anymore? When I applied for my undergrad degree, acceptance didn't depend on who one was friends with.<issue_comment>username_1: Your "merits" consist of more than just your undergraduate grades. They also consist of your ability to work with others, your curiosity, your interest beyond just learning what's necessary for the test, etc. This is why we ask for letters of recommendation. As for who exactly writes your letters, there is more leeway. I don't know about MBA programs, but in mathematics we would usually be equally happy to get letters that do not necessarily include your current supervisor. I would imagine that letters from former supervisors would be just as acceptable. The point, however, is that your supervisors can speak to your work ethic and abilities in ways nobody else can, and consequently provide important insight to the committee evaluating your application that cannot be obtained in any other way. In particular, co-workers likely have a different perspective on you than what a former supervisor sees in you. Upvotes: 3 <issue_comment>username_2: I would like to start by saying that there is no way to get around the requirement of recommendation letters (at least not in my field (CS)). Most universities would consider an application incomplete without the required number of references, so it won't even get processed. And most of the time you can get away with recommendation letters from other faculty members. You don't really need a letter from your supervisor. If it is easy to get, then it is an added benefit, but not a necessity. I am sure if you write to any of your former professors, introduce yourself and remind them of the classes you took with them, they would be more than happy to write you one. Once they say yes, you should send them your current CV and any other resources that might help them understand your accomplishments and career goals. In grad school it is not about who you are friends with; the school wants to make sure that you have what it takes to pursue graduate studies. And who can say it better than the people who taught you, did research with you, or supervised you? Upvotes: 2 <issue_comment>username_3: Your letters show, among other things, that you are good at establishing and maintaining professional relationships. Imagine what a request to have letters waived would look like to an MBA program. The first thing they would learn about you is that you can't name 5 people who would write a glowing recommendation for you. That's not the impression you want to make.
Undergraduate programs care less about letters because at the end of high school, you haven't yet had a chance to establish a reputation. MBA programs, at least the ones worth attending, expect you to have substantial experience in the workforce prior to attending. It's not like objective measures (e.g. the GMAT or GRE) don't matter, but people skills are just as essential as figuring out how to read a graph. Upvotes: 0
2015/12/29
925
4,155
<issue_start>username_0: This is a quite broad question, to which I have some personal answers, but I think I need some different perspectives. I am not talking about evaluating a peer's submitted work as a referee, but about evaluating applications ranging from small grants or sabbaticals (which have to be applied for, at least in France) to tenure or tenure-track hiring. Such decisions have a small or huge impact on both others' careers and the overall academic system (by giving certain incentives), and should thus be taken very seriously. On the other hand, the time we are able to use for such evaluations is limited. > > How should we deal with these opposing constraints (evaluation is a serious matter but time is limited)? What tools and proxies should we use or avoid in the evaluation process? > > > I am **not** asking about how the system should be (I know my answer: more automatic funding, sabbaticals, and pay raises, to have fewer evaluations made more thoroughly), but about what one wanting to take evaluation seriously (as opposed to a formal game that only needs an arbitrary answer) can do in the current system. Actions that might imply change in the system are welcome, as far as they can be implemented individually and the plausible outcome is considered rather than the ideal outcome if everyone did the same thing. **Edit in view of comments** **Clarification about "the system":** if needed, please specify in your answer the relevant bits of information about the evaluation system you are speaking about, as of course things evolve and vary from place to place -- but the broad principles should be applicable regardless of the fine details, so please keep this as concise as is relevant. **Example of subquestions:** it has often been said that impact factor, or more generally the glamor of journal titles, should not be used for evaluation. Is there a consensus about this? I don't know. They are used in practice, but are there good alternatives? Are there specific evaluation circumstances when one should use this proxy? What about the h-index and other citation metrics? When should they, and when should they not, be used in evaluations? The question is not so much about how much time one should spend (which is more or less a given), but how to achieve a reasonable result in that time. Also, what to do in a situation where one feels a reasonable result cannot be achieved?<issue_comment>username_1: This question is heavily opinion based, but I personally think that the system used in the British REF is pretty good: What are the three best papers the person has written in the last 5 years? Of course, the quality of a paper is also opinion based, but at least in mathematics you have measures like: How many famous people have tried to achieve what this person does in this paper? Does the proof use methods which have previously been applied somewhere else? Does the result or the method have applications within or without mathematics? Upvotes: 0 <issue_comment>username_2: I'm only going to answer the first part of this question: > > How should we deal with these opposing constraints (evaluation is a serious matter but time is limited)? > > > It is true that it is often rather time consuming to do these evaluations. But I think ultimately, you need to apply the *categorical imperative*. In other words, you need to spend as much time and diligence on evaluations as you would expect your evaluators to spend on yours. For a two-page application for a sabbatical, this may not be very much.
But for someone applying for tenure, surely a thorough reading of at least the most important papers, along with some research about the impact of the candidate's work, is appropriate. This is thankless work, as you don't usually get a lot of credit for it. But it is nevertheless something that can't be taken lightly because of its outsize effect on the candidate and the importance of your evaluation. Consequently, if the case is not completely straightforward, if you find that you need to spend a couple of days on it, then that is what you need to do -- because you would want your evaluators to do the same. Upvotes: 2
2015/12/29
5,022
21,037
<issue_start>username_0: This is about a phenomenon often seen, but I came across a particularly prominent example recently, so I thought I'd ask for the community's opinion. Here's **the situation**: In the past semester's exams there was a question in computer architecture asking for X, and I, and many other students, answered Y, which contains X and mostly revolves around X, thus successfully answering the question (in our opinion, at least). The professor rejected the entire question (20% of the grade), and at any attempt to talk to him he directed us to the sample answer, stating that we "are wrong", we "answered another question, not the one asked", and so on. The answer wasn't even given partial credit, and that happens systematically on his tests. Another similar issue on the same test was when a few of us optimized a small bit of code in a way that would be significant in industry, and that part was rejected similarly, without any explanation, as "being entirely different". We also wrote explanations on the test, so he cannot claim we did it without explaining. This seems to be a general policy of the professor, as numerical errors are similarly crippling to grades, and in the follow-up course, taught in an amphitheater of 80 or more people (and of course many more don't even attend classes, as attendance is not obligatory, but I don't know if they're better or worse than those who attend), only 11 passed the exam. I would finally like to note that some of the things he rejects, he himself does in class, the most obvious being a computational error; frustratingly, he also performed in class the very optimization he rejected on the test. So finally, **the question**: I understand that professors have the right to grade based on their own criteria, but is it justifiable to reject an entire answer because more was said, without even partial credit? Is it justifiable to reject an answer that gives something better than what is required, perfectly equivalent, even if relying on the student's intervention? (in this case, it would be a compiler optimization, thus not an entirely direct interpretation, but an elegant and industrially valuable workaround) Is it acceptable to direct students to example answers and not allow them to explain why they think their answer is correct? I'm also itching to see how we could deal with this, but that's not the main question, so I leave that to your own discretion and benevolence to answer.<issue_comment>username_1: It depends on whether the original question or parts of it ask for some kind of differentiation. For example, if I ask for the best way to cook Brussels sprouts, a general overview of cooking methods won't do. Even if you correctly say that sprouts should be roasted in almond butter, it makes perfect sense to deduct points if you also talk about deep frying, because describing other methods of cooking looks like fishing for partial credit in case your initial guess about almond butter was wrong. However, deducting points without explanation is wrong. If the class is large, the professor cannot discuss every single answer with every student, but if a personal discussion is impossible, then at least the example solution should contain comments concerning all problems which occurred several times. Upvotes: 2 <issue_comment>username_2: I know from personal experience that it's very frustrating not to get proper credit for knowledge that you think you demonstrated correctly, so let me start by saying I sympathize with your situation.
Now let me address your questions: > > Is it justifiable to reject an entire answer because more was said, > without even partial credit? > > > To give a literal, completely general answer: obviously yes. If the correct answer is, say, one sentence long, and your answer quotes the complete works of Shakespeare that happen to contain that sentence, I personally would not give you any points (but I would still be very impressed :-)). Of course, that is an absurd example, but it illustrates the point that what likely matters to your professor (or if not, certainly what *should* matter to a reasonable professor) is that your answer **demonstrates an understanding of the topic the question asks about**. It is certainly possible, and happens pretty frequently, that an answer Y contains the correct answer X but by adding more irrelevant things demonstrates a **worse level of understanding** than just the shorter answer X. The way this typically happens is that a student doesn't understand the material well and upon being asked about a certain topic, regurgitates everything they memorized about the topic, not knowing which part is relevant to the question. Clearly that does not leave a very good impression. Now of course, the above does not address what happened in your specific situation. It is completely possible that your longer answer Y still demonstrated as good of an understanding of the material as the official answer X, and deserves to get partial or full credit; or not - you haven't given us enough information to say. > > Is it justifiable to reject an answer that gives something better than > what is required, perfectly equivalent, even if relying on the > student's intervention? (in this case, it would be a compiler > optimization, thus not an entirely direct interpretation, but an > elegant and industrially valuable workaround) > > > I think this question is too specific for anyone here to be able to answer without knowing more details. As I said above, the points you get for the answer should be correlated with the perceived mastery of the topic being asked about that your answer demonstrates. If including the optimization demonstrates that mastery, you should get the points. But I can imagine a situation where using the optimized method instead of the official method actually demonstrates *less* mastery, for example if it appears that you used the optimized method to avoid revealing that you don't know the simpler, less sophisticated method the question asked about. In that case, it may make sense for your answer to be rejected. > > Is it acceptable to direct students to example answers and not allow > them to explain why they think their answer is correct? > > > I think it may be acceptable in certain cases. For example, if you have already asked many questions of the professor, argued with him in a way that he thought was unreasonable, overly argumentative, or delusional, and if the professor is very busy, for example is also teaching a class of 500 students in addition to the smaller class you're taking with him, at some point he may decide that spending more time discussing your exam with you can limit his ability to perform his other duties effectively, and be a disservice to the other students. --- **Summary.** As you can see from my answers above, whether the professor's behavior is unreasonable depends on many details that you haven't provided. It's certainly possible that it is unreasonable, but it's also possible that it is less unreasonable than you may think. 
Statistically speaking, I've found that students often overestimate the extent to which they understand the material and the degree of correctness of their exam answers. That doesn't mean it's what happened in your case, but it's a possibility you should keep in mind as you consider your next steps. In any case, good luck! As for what steps you should take to address the situation, this answer is already quite long, so I think it makes sense to save that for a follow-up question if you care to post one. Upvotes: 8 [selected_answer]<issue_comment>username_3: I had a professor like this when I started my CS degree - I hated him, three semesters in a row, as I struggled to keep my GPA afloat because of his grading style... I even got into a verbal argument with him once that left me red in the face. Then I was grateful. Your post raises the question: Are you in it for the degree or for the knowledge you obtain from going? Your professor, although hypocritical and imperfect at times himself, is holding you to a higher standard. He doesn't want a "good" answer, he wants the "best" answer, and as such, he will craft you into more than just a "good" developer. How you choose to go forward with all this is up to you. I can tell you from experience, you won't win (at least not without going above him), and even if you did, the outcome may be less desirable than you realize. You could always go to another professor, or another version of the program at a different college, and earn your slip of paper, but you'll be doing so at the cost of learning what the best answer was to all the versions of these problems you'll face in the future. Upvotes: 2 <issue_comment>username_4: username_2's answer is excellent, but I'll add a bit more perspective. Your instructor sounds a bit severe but not entirely unjustified. > > Is it justifiable to reject an entire answer because more was said, > without even partial credit? > > > Yes. I do this (perhaps in smaller doses). Here's an example: If I ask the question, "find the margin of error for this estimation", and a student works out an entire confidence-interval estimation -- which includes the margin of error as one of its terms -- and then boxes the C.I. as the answer, then I think: "This student doesn't know what the margin of error is", and needs to be corrected. In that case I'd take off partial points, and probably receive the exact complaint from the student that you're presenting here; after a short discussion, the student usually sees that their understanding really was lacking and needs some improvement. Sometimes, for shorter explanatory questions expecting maybe two lines, a student will write a whole half-page regurgitation of all the subject matter of a particular chapter. If it's a 1-point question, and it seems like the student really can't incisively identify the particular item asked, then off comes the (entire) 1 point. When I see responses like this, I'm taken back to bad interactions I had in the past with my own teachers or coworkers in school or industry. E.g.: a teacher who, when asked a tough question, wouldn't respond on-topic but would give a random dump of other information so they'd look smart, without answering your question. Or a coworker who would behave similarly. Either case is a big aggravation, so if I can I'd like to help escort people out of thinking that that's useful/rewarded behavior.
It's so much more helpful to have a colleague say, "I don't know", so you can go search elsewhere, rather than do a brain-dump of random information on you, which is just wasting your time. > > Is it acceptable to direct students to example answers and not allow > them to explain why they think their answer is correct? > > > Yes. Especially in large courses, such as your "amphitheater" course. Presumably the instructor spent time writing up the example answers and distributing them, specifically to save time from interacting with students one-at-a-time on the issue. (See also: ["it's on the syllabus" comic](http://www.phdcomics.com/comics.php?f=1583), ["it's on the syllabus" T-shirt](http://www.zazzle.com/its_on_the_syllabus_t_shirts-235234957499314426), etc.) Remember that at most research universities (likely ones with large amphitheater courses), *teaching is not the professor's top professional priority* (in terms of evaluation, promotion, etc.). Published research is required to come first, and then teaching is effectively a secondary part of the employment -- so the time spent on the teaching side, especially on large introductory courses, necessarily has to be kept very constrained. (Personally, I'm a dedicated lecturer, not a research professor, so I'm very happy that I get to spend more time responding to individual questions from students, providing personal feedback on tests with partial credit, etc.) Upvotes: 5 <issue_comment>username_5: It's hard to say without seeing the specific question and answer. I can easily imagine scenarios where too much information would be wrong, because it would indicate that you don't know which portion of the answer you gave is relevant. To take a silly example, if a history test asked, "When was George Washington elected president?", and a student answered, "One of the years 1750, 1751, 1752, 1753, ..." etc., listing every year from 1750 to 1850, to say, "But the correct year is in there somewhere! I am being penalized for giving too much information!" would be true, but clearly the student does not know the correct answer. It is properly marked wrong with no partial credit. On the other hand, if a student answered, "In 1789, and his vice president was <NAME>", I'd say the bit about <NAME> is not relevant to the question and I don't know why he mentioned it, but I probably wouldn't take points off. But in general, any information that is not relevant to the question likely indicates a lack of understanding of the subject. If a test asked, "What loops are available in Java?", and a student wrote, "FOR, WHILE, and IF", that's simply wrong, because "IF" is not a loop. Yes, you could say it's extra information in addition to the right answer, but it's flatly wrong. It indicates that the student either doesn't know what a loop is or doesn't know what an IF does. The best case I can think of is if a question is potentially ambiguous. Like, if a history test asked, "When did <NAME> become president?", a careful student might answer, "He won the election in 2008 and took office in 2009", because the question is not entirely clear about which it is asking for. And ultimately, if you give more information than asked for, you are wasting the professor's time by asking him to read it. If a test asked, "How much is 2+2?", I'd expect the answer "4". Anything more than that is wasting my time, and at some point I'd start penalizing for it. *Update several years later* I just got an upvote (thanks!)
that drew my attention back to this old post, and looking at it again, I came up with a better example than my George Washington one. Suppose a test asked, "What causes a compass needle to point north?", and a student answered, "The gravitational pull of Jupiter, the magnetic field of the earth, and the Coriolis effect." Yes, the right answer is in there, but the fact that the student gave additional information that is irrelevant shows that he does not know what IS relevant. He doesn't understand the subject that the test is asking about. A teacher might give partial credit, thinking the student has some understanding, but he'd be justified in saying no, you don't get it. Upvotes: 4 <issue_comment>username_6: > > Here's the situation: In the past semester's exams there was a question in computer architecture asking for X, and I, and many other students, answered Y, which contains X and mostly revolves around X, thus successfully answering the question (in our opinion, at least). > > > You are saying that, because X is in Y, Y should be a valid answer to the question. However, one could argue that the professor is testing your knowledge of X, so he expects to see X on the exam. Y might show a misunderstanding of X – it could suggest that the student is confusing X and Y, or cannot differentiate between them. I've had students answer essay questions almost as if they had been coached: *"Write down everything you can remember associated with that topic. Chances are, something in your answer will be what the professor is looking for."* I've seen that often enough that, before a midterm exam, I now warn my students to throw that strategy out the window. "Your Jedi mind tricks do not work on me," I tell them. > > I would like to note that some of the things he rejects, he himself does in class, the most obvious being a computational error; frustratingly, he also performed in class the very optimization he rejected on the test. > > > This might be a telling clue. I know that sometimes I get frustrated when students parrot back things to me that we covered in class. My exams are not meant to see if my students can regurgitate tidbits covered in class, but to see if they can demonstrate mastery and expertise. Part of my pre-test advice reminds them that I'm often more impressed by an original example than a summary of one discussed in class. So it's quite possible that, by including the optimization covered in class, it seemed once again like you were grasping at straws instead of competently answering the question. > > numerical errors are similarly crippling to grades ... of 80 or more people, only 11 passed the exam. > > > I'm not sure what you are referring to here; does that mean about 70 people got a failing grade? If so, that is perhaps the most disconcerting part of your question – but I still wonder if I'm getting the whole story. Are these truly *exams* that are worth a significant portion of your final grade? Or are these *quizzes* (which is more of what I would expect if you have one in "a follow-up course," as you say)? If I were grading 80 quizzes on a regular basis, I, too, might be very stringent about what I'd accept as an answer. > > This seems to be a general policy of the professor. > > > To me, this is the crux of your question. Now that you know this is the general policy of the professor, what will you do about it? If I were in that class, I'd be very careful in my future quizzes and exams. If the professor asks for X, then I would give him X, the whole X, and nothing but the X.
Evidently, this professor values the student's ability to differentiate X from Y, and not confuse the two. Without having a conversation with the professor, I can't really tell if that's motivated by pragmatic or pedagogical reasons. However, in and of itself, it doesn't seem unfair to me. --- As a footnote, I hope that your professor isn't a member of this community, because, if he is, he'll almost assuredly downvote my answer, since I've not addressed your enumerated questions, but focused on your introduction instead. Upvotes: 3 <issue_comment>username_7: If the test question is multiple choice, you can't get away with saying that the answer is either A or B or C or D or E, even though that would be logically correct (the answer **is** one of those alternatives). Clearly, on a multiple-choice question, the answer must be a single, correct choice of one of the alternatives to be considered correct. Now the issue might be: what if the test question calls for a paragraph or essay answer? Is an answer that contains the correct answer still correct when it also includes other information that does not contradict the answer but is irrelevant to it? That's not immediately obvious to me. It seems to me to depend on how much irrelevant information is in the answer. Upvotes: 0 <issue_comment>username_8: I am an ex-physicist, and such an expanded answer would be fine in my book for a scientific course. Now in industry, I work an awful lot with legal and compliance groups, where giving **exactly the answer and not an iota more** is key. So in management or similar courses, an expanded answer would get a reduced grade. Zero points is not acceptable, though. Upvotes: 2 <issue_comment>username_9: The questions as formulated appear completely clear to me, and I'm surprised that the following seems to be a dissenting opinion: 1. No, it is not appropriate to lower the grade for an answer that contains more than is asked for. A clear distinction should be made between (a) a correct answer supplemented by correct additional information, and (b) multiple answers, only some of which are correct. The examples in the other answers pertain to type (b). If the additional information cannot be reasonably mistaken for a second answer, or it can be but this answer would also be correct, deducting points for including it is clearly inappropriate. A somewhat separate situation is if the given answer only contains the assumed answer implicitly: then the teacher may judge that making it explicit is an additional step that the student failed to do. This would be petty in many cases, but not inappropriate. 2. No, rejecting an answer that is better than the required one is clearly inappropriate; there are no ifs and buts here. The answer must solve the given problem under the given, explicitly formulated constraints; if it does so, that's it. 3. No, it is unacceptable. It is part of a professor's job to give a fair hearing to any student claiming that their solution is misunderstood. If there are contradictions between the sample solution and the student's one, the professor should either point out a mistake in the latter (ideal), or say that it's on the student to explain what's wrong with the former (acceptable). But if there are no contradictions, the fact that they are different does not say anything about the correctness of either of them. Upvotes: 0
2015/12/29
643
2,651
<issue_start>username_0: My thesis supervisor is not reviewing my thesis. For my three-year degree (computer science), I wanted to write my thesis on the development of my 3D engine. I emailed the professor about my intention; he wanted to see me for a brief meeting (in June), where I explained to him what was different about my work, gave a short roadmap of the work that had to be done, and described what I was already developing. I completed my thesis PDF in the last days of August. I emailed the professor about it, and he replied that he was busy, so I had to skip the October graduation session; he just asked me to hold on. Then I emailed him again, and got no reply until it was too late for the December session. In his last email he had just taken a quick look at my thesis, saying things like "it is not a typical thesis; however, for a three-year degree it should do, even if it has naiveties. I will contact you before Christmas; I should have enough time to follow you until January 10th. However, I remind you it was your initiative." Shouldn't a thesis be my initiative anyway? It is now after Christmas and I still have no reply. What should I do? It has been almost six months without any supervision. I do not want to also miss the March session. What will someone hiring me think? I completed all my exams on time and with good grades. Now employers will just see someone who "graduated in 4 years instead of 3", simply because a professor did not review my thesis, even though I did an enormous amount of work in research, study, and development. Why didn't the professor simply tell me, "I have no time to supervise you, look for someone else"?<issue_comment>username_1: You should have looked elsewhere when you noticed you weren't getting *any* attention. Perhaps you should take your work to somebody else who will look it over, and graduate with him/her instead. Or look for a new topic. My condolences. Upvotes: 2 <issue_comment>username_2: I had one of those issues when I was doing a Ph.D. You should contact your supervisor first and cc the department head. That way you have more people in the loop. In that email, make an appointment. If the supervisor doesn't get back to you, that's a red flag. You will then be able to speak to the department head about it, and he/she will have some context when you see him/her. In that meeting with the head, speak about your issue. They will solve the problem. If your supervisor does get back to you, which I assume he/she will since the department head is now in the loop, don't be confrontational. Just have a normal discussion about how you couldn't get a response and that you need more help. That's how I'd approach it. Upvotes: 2
2015/12/29
1,360
5,842
<issue_start>username_0: Suppose I have envisioned a concept that I am fairly certain has been explored before. I can describe its nature and its characteristics, but I do not know the formal name of the concept, nor its history and prior development. My objective is to formalize my research by identifying the concept's formal name and basing my research on its preexisting foundations. (My research may or may not elaborate/extend the current state of research; it could simply be a search for an existing concept I want to investigate). **What would be the name (i.e., the definiendum) of the process of finding both an existing concept and porting your notions of that concept onto the formalisms previously established for that concept?** Up to now, I have been informally using the terms *rebasing* and *porting,* as in > > rebasing/porting my own informal understanding of the concept onto its formal foundations > > > If no such word exists for what I've described, I do offer *rebasing* and *porting* as self-descriptive definienda. Perhaps the first part of my definition, i.e., finding an existing concept, can be considered optional as it is well-described by *reverse lookup;* however, the latter part is key. To make it clearer what I am after, here is an example: Alice is studying networks, and exploring how to permit communication between any two nodes using the fewest connections possible. As she develops this concept, she suspects it might have already been fleshed out in academia, so she searches for the name of the concept and finds spanning trees. Now that she knows what it is she has conceived, she can explore the concept further. She can also *translate (port, "rebase", etc.) her personal, informal language, notations, conventions, and concepts onto their corresponding formal equivalents previously established in academia.* What is the name of the translation/porting/rebasing/etc. process described in italics in the previous sentence?<issue_comment>username_1: I'm not sure that any such specialized term exists, or at least I have never heard of one. The process, however, is quite familiar to me. I find it most interesting when I discover that several different subjects have all approached the same topic from different directions, each typically having invented its own largely unrelated terminology. As for what to call it? I would typically refer to it simply as searching for related work and then connecting my work to the prior results. It might be a fun target for a neologism, though! Upvotes: 2 <issue_comment>username_2: I don't have a name for the two elements you describe, but I do have names for each distinct step: 1. *"What would be the name (i.e., the definiendum) of the process of finding ... an existing concept ..."*: I call this **"search for concepts"**. [I have an article that describes this in some detail](http://ssrn.com/abstract=2699362), but in brief, there are two main components: first, search past literature for the concept along with all the synonyms that you are aware of; second, identify new synonyms for the concepts from the studies that you identify and then search for those synonyms in the literature. You continue this process iteratively until you've satisfactorily identified the concept and all its synonyms. 2. *"What would be the name (i.e., the definiendum) of the process of ...
porting your notions of that concept onto the formalisms previously established for that concept?"*: I would simply call this a **redefinition** or **clarification** of a concept. For me, it's that simple, but please note my following comments. It seems that in your search for a new name for this process you place a lot of weight on the novelty of the fact that you are incorporating your own notions or insights into the new and improved version of the concept. However, this is what every researcher does if they define a concept. They are not simply repeating prior definitions, or else they would just quote "So et al (2015) defined it as 'la la la'". By offering a definition in their own words, they are always incorporating "personal, informal language, notations, conventions, and concepts onto their corresponding formal equivalents previously established in academia". I don't see any need to coin a new term (such as "rebase" or "port") for something that is nothing more to me than offering a new definition of an existing thing ("redefinition") or "clarifying" a previously ambiguous definition. One thing you did not mention, though, which I consider extremely important if you offer your own new definition of an existing concept: You should very clearly explain why the past definitions are unsatisfactory for your purpose. Why are you multiplying definitions for the same general thing? You don't need to say that past definitions are bad; you only need to demonstrate that your purpose is different in an important way, and that past definitions do not do it justice. If you cannot clearly justify this, then it is best to simply pick the best of the existing definitions you have found (and explain why the one you picked is best for your purposes). Also, please note that my answer applies strictly to dealing with existing concepts, not to new concepts that you have discovered or invented; my answer would be different for defining a brand new concept. Upvotes: 1 <issue_comment>username_3: I hereby define porting, in the context of conducting research, to be the process of adapting one's notions of a concept onto previously established theories, terminology, and conventions for that concept.[1](http://www.ryoga.org/2016/01/19/on-looking-up-a-concepts-name-and-a-neologism-for-adapting-ones-notions-of-a-concept-onto-previously-established-theory-terminology-and-conventions-for-that-concept/) Upvotes: 1 [selected_answer]
2015/12/29
873
3,787
<issue_start>username_0: Lately I've been wondering if I should drop out of a PhD program (in economics). After graduating with a bachelor's degree, I desperately wanted to know more about the field, and I enrolled in a PhD program. I passed the qualifying exams without any problems, and it's time to do research. The problem is, as I am beginning to realize, I am not at all sure if I want to do research. I believe what I really like is learning new things and applying that knowledge in e.g. conversations, real-world problems, etc. I'm generally very happy and excited when I finish reading a paper or a book chapter, or solve a mathematical problem. But the fun stops there. I have no interest in thinking up my own problems because, the way I see it, *there are already so many papers and books I haven't read* that I'd much rather spend my time studying the material than going really deep into one particular subject and focusing on it for the next few years (and probably even longer if staying in academia after graduating). Is this a common way to think at this stage? I've talked about this with a friend in the same program, and he doesn't seem to understand. He is the very opposite of me: he did quite poorly in the exams, but seems to have quite a few ideas (most of which are flat-out bad and have been shot down by his supervisor). I, on the other hand, haven't even presented any ideas yet because I don't have any good ones, and would rather study more. I've realized that perhaps I am just an information sponge, and not a researcher. Can anyone relate to that? I feel like I want to know a little (well, a lot actually) about everything, and not everything about a very specific topic.<issue_comment>username_1: Instead of thinking about research as 'thinking up my own problems': is there something you read where your immediate reaction was 'that seems weird, I wonder why that is?' and you were then not able to find the answer? While it is true that research is about finding problems that other people haven't investigated yet, it is more true that research is about finding answers to those problems. Reading widely is a good thing as it exposes you to different approaches and different potential sources of the question that sparks your interest. However, you are probably getting to the point that you should be focusing on a specific problem (since you have passed your exams, you must be some way through). I don't know what your program allows, but you might consider an interdisciplinary area where the balance between breadth and depth is different. Upvotes: 2 <issue_comment>username_2: Wanting to understand things deeply should be the main reason for getting a PhD, so I think you're okay there as long as you're okay with the *deeply* part. Now just realize that's what research is, and when you understand things deeply, you often discover something new. You of course won't know if (novel) research is really for you or not until you try it out, but I don't see any warning signs. One of my friends (who is considered a major leader in his mathematical field) had thought that there is so much beautiful math out there that, if we didn't have to do research for our career, we might spend all of our time just learning different old math and not creating new math. (I think this is a bit of an exaggeration, as we would naturally create new math when understanding old math, but it illustrates the continual desire to learn new things as an academic.) Also, there is a wide range in the research vs learning spectrum in academia.
Many professors just work in a very focused area all their lives, while others spend most of their time teaching and learning and write very few papers. But I think, as a whole, PhDs love learning for the sake of learning. Upvotes: 2
2015/12/29
817
3,697
<issue_start>username_0: I took a Theory of Computation class years ago through my university's math department that I really enjoyed and found interesting. If I wanted to further pursue study/research/specialization in Theory of Computation, what kind of Ph.D. program should I be looking for: Computer Science or Math? I am leaning towards Math grad school, since I am not inclined to take classes about database and OS design. Has the prevalence of computer science programs in major universities during the past 30 years made the typical new researcher in the Theory of Computation a CS grad student, or is it still normal for pure math grad students to specialize in the theory of computation?<issue_comment>username_1: Check the curricula of prospective schools, contact potential advisors/research groups. I'd wager that graduate studies in theoretical computer science will rarely include the kind of subjects you want to avoid, at least not as mandatory classes. Upvotes: 1 <issue_comment>username_2: Generally speaking, in the US, the theorists will be in the computer science department, but it will not be hard to make arrangements for you to work with them as a student in the math PhD program. Would you prefer to TA and eventually teach introductory programming courses, or calculus courses? That's probably going to be the biggest difference (especially down the line after you graduate) between the two options. Upvotes: 3 <issue_comment>username_3: I can't say much about the CS side, but I can write about what it would mean to study computation on the mathematics side. Perhaps if someone else writes about the CS side, the OP can compare. In mathematics, computability is studied as part of mathematical logic. The key initial concept is Turing computability, but the focus is just as much on *non*-computable objects. So the structure of the Turing degrees is a key topic at first. Within computability theory, there are many areas: "classical" computability, higher computability, Reverse Mathematics, computable analysis, and crossover with areas such as proof theory, effective model theory, and effective descriptive set theory. Topics such as computational complexity theory, compiler/language theory, and automata are not studied as often in mathematics departments. The methods used are very mathematical. Mathematical computability theorists tend to focus much more on *proving* results than on *implementing* anything. For your qualifying exams, you will need to learn several other basic areas of mathematics at an introductory graduate level, such as real analysis and/or abstract algebra. An undergraduate degree in mathematics, or very good mathematical preparation otherwise, is required to be admitted to a PhD program in math. The field of computability is not extremely large, which can make both finding a PhD program and finding an academic job more challenging. Essentially, mathematical logic as a whole is only as big as a subfield of many other areas of mathematics, and computability theory is only a part of mathematical logic. Upvotes: 2 <issue_comment>username_4: I was just in this situation, graduating with a PhD in Theory of Computing. I would recommend looking for an Applied Math department. Although most applied math departments are oriented to scientific applications (like partial differential equations), you may find that they offer a great deal of flexibility in coursework. I was able to take all the math classes and all the CS courses I wanted.
I was able to skip a lot of the non-theory CS classes (operating systems, databases, etc.) and a lot of math classes that didn't appeal to me either (analysis, algebra, etc.). Upvotes: 0
2015/12/29
3,597
15,588
<issue_start>username_0: I was using a Web service I particularly like the other day, and noticed that they have a referral program. (More specifically, [UsabilityHub](https://usabilityhub.com/), but the details of the service itself aren't important.) So then this hypothetical scenario came to mind. Suppose I'm slated to teach a class, and decide that said Web service would be valuable in my teaching. Let's say that: * It's genuinely useful in my pedagogy and/or for helping students produce quality work. * It's a commercial service and has no student discount, but the most relevant parts are free. * I don't *require* use of this Web service, but I *encourage* it (though not through any formal means, i.e. no extra credit for using this service). * I happen to use this service for both personal/hobby projects and for professional work, all under one account. Now suppose that this Web service provides a **referral program**, i.e. if anyone signs up with my referral link, both I and the new user get benefits of some kind. **However, these benefits are not monetary in nature--they're a form of credit for the service, and cannot be redeemed for cash.** **Is it ethical to ask my students to sign up to this Web service through my referral link, given that I use this Web service in both a personal and professional capacity under the same account?** Let's say that I tell them everything that I mention in this post, too (including my usage of the service). Oh, and I should clarify that **this is entirely a hypothetical scenario**. I'm not even in a position where this is actually an issue for me.<issue_comment>username_1: This is not fundamentally different from recommending (not requiring) a textbook for which you receive royalties. I would be very uncomfortable doing either. For another comparison, my students can buy others' textbooks from Amazon for less than from the bookstore. I could have an Amazon Associates account that pays me every time a student buys a book through my link. (So, both the student and I receive a benefit.) I have chosen not to do that because I believe it would be unethical. I refer you to Meyer's\* Rule of Ethics: in any ethical dilemma, the thing you least want to do is the ethical choice. \* From the Travis McGee novels. Also from <NAME>. Upvotes: 1 <issue_comment>username_2: I don't see any clearly unethical behavior here. You use the tag "conflict-of-interest"...but where is the conflict? To me it looks more like a confluence of interest. The best argument I can make is that the behavior might *appear unethical* to others, and as a professional and representative of a professional group (your department, your university...) you may well want to avoid that. A hard-nosed outside observer may say: how do I know that you did not gain a financial incentive *at some pedagogical cost*? If I were in this position, I think I would not have ethical qualms provided that I made clear to everyone how they can make use of the service for no money and still get the full course experience. I would regard this behavior as more ethical than requiring students to pay a fee (typically on the order of $50-$100 per course) to use a professional service like WebAssign for required course elements like homework grading. In my mind, providing good assessment and feedback to the students is a key responsibility of the course instructor, the department and the university.
Requiring -- or really, even sufficiently strongly encouraging -- students to pay for an external service is an alarming abdication of that responsibility. Moreover the slope looks quite slippery to me: this is clearly a step towards a for-profit model of higher education. Just because in this latter scenario the money is getting put in the pocket of a big company rather than the instructor or the department does not make it any better to me! Upvotes: 4 <issue_comment>username_3: There is a saying, ["Caesar's wife must be above suspicion,"](https://en.wikipedia.org/wiki/Pompeia_(wife_of_Julius_Caesar)) which I believe applies well to an ethical dilemma of this sort. Despite the somewhat nasty origins of the quote, what it has come to mean is that in some circumstances one must not only act ethically but *also* act to ensure a very clear appearance of ethical behavior. In this case, the question is this: could a reasonable person question whether you might have chosen to recommend this service in part because of the benefits that redound to you yourself? In this case, since there are a number of other ways that you might have gone about obtaining similar capabilities for the class, it would be reasonable for a student or other person to wonder if you were influenced to recommend that particular site due to the benefits that you would obtain through the referrals. Given your position of authority, then, I believe that does create a genuine ethical concern regarding the referral program. I see two basic approaches to resolving this: 1. Recommend the service, but do not make use of the referral program. 2. Recommend both this service and several of the other competing services. Note that this is the one that you personally use, and for this service provide both a non-referral and referral link (being transparent about the benefit to you). Personally, I would prefer the first option, since it's completely unambiguous, but would not find the second particularly problematic. Upvotes: 6 <issue_comment>username_4: > > Is it ethical to ask my students to sign up to this Web service > through my referral link, given that I use this Web service in both a > personal and professional capacity under the same account? Let's say > that I tell them everything that I mention in this post, too > (including my usage of the service). > > > Before answering the "is it ethical" question, let's recognize that **this would create (at least) the appearance of a conflict of interest**: by managing the course in a way that benefits you monetarily on top of your normal salary, you can cast doubts in the eyes of the students or other outside observers as to whether your actions are purely motivated by the desire to get the best educational outcome or are influenced by the external, and potentially conflicting, interest of making money. Now, it is implicit in your question that you recognize this issue and are trying to think of ways to allay those doubts. Unfortunately this is not as simple as it seems. Consider some of the things you are proposing to do: * *Advertise publicly that you believe in this product and use it yourself.* The problem with this is that those actions are also consistent with the actions of a person who is "in it for the money" and doesn't actually believe in the product, but wants people to think that he does. 
* *State emphatically that the students' use of the service is voluntary and absolutely not required.* The problem is that this could still exert a subtle pressure on the students and signal to them that you would be happier with them if they used the service. You are an authority figure and hold substantial power over the students, and some of them may want to curry favor with you, either consciously or subconsciously, and might therefore use the service even if they don't think it's useful to them. Also, even if only a few students end up being influenced to use the service by your recommendation, you are still benefiting monetarily, so this does not remove the suspicion of an action motivated by a conflicting interest. * *Share the link to this Academia StackExchange question with your students, to let them know you have thought seriously about the ethical implications and strengthen their belief in the nobility of your intentions.* Again, the problem is that an **insincere** person whose intentions are **not** noble might behave in exactly the same way. To summarize this part of the answer, although it seems likely that your motivations are pure, which I assume is why a wise person like username_2 decreed in his answer that there is no conflict of interest here, it is still the case that there would be an **appearance** of a conflict (which would also mean that at least in principle there could be an **actual** conflict), and it is not obvious how this appearance can be completely removed. Now let's turn to the ethics question. I happen to be on a committee of my institution that oversees potential conflicts of interest (which we refer to as COIs). The context is different (COIs that arise in scientific research funded by a mixture of public and private money on topics with commercial potential like pharmaceuticals) but many of the issues are similar to those raised by your question. One of the first things you learn when working on such a committee is that academia has many areas with a strong potential for conflicts of interest. Unfortunately it is completely impractical to take an approach that would simply forbid such conflicts from existing (e.g., by forbidding research to be funded by an entity with a commercial interest in the outcome of the research), since that would mean that a lot of very important research would simply not get done. So, the question becomes instead *how to manage* the conflict by taking steps such as disclosing the conflict in various ways and other measures. The point I am trying to make is that just because there is a potential for a conflict doesn't mean that the proposed action is unethical. However, it certainly means that special care is necessary to make sure you are not even *perceived* as acting unethically, and it also means that there would have to be a fairly compelling reason for the proposed action to be taken. In your case, I have to admit that I am not seeing such a compelling reason. The small monetary gain and other benefits you are likely to receive from the referral links are overwhelmingly smaller and less significant than the benefits of research that could lead to the development of a new drug or medical device. So I think the risk that your actions would be perceived negatively by your students or employer in this case far outweighs what you stand to gain. My recommendation is therefore: don't do it. The only exceptions I would make are 1. if you ask your department chair and he or she specifically approves this; or 2.
if you announce that you will donate the proceeds you will make from the referral links to charity (and specifically, a charity whose mission is completely uncontroversial and could not possibly be frowned upon by any of your students). Upvotes: 3 <issue_comment>username_5: As the meaning of "ethical" is rather personal and there is no evil lurking here, I suggest you get around the problem. One way of doing this is to... 1. Make the issue clear in your syllabus or course site: just stating what you write in your question should do it. 2. Offer your students two links, one going through the referral program and the other not, while explaining the differences in terms of consequences both for you and for the student. (Having access to click statistics would be interesting, btw.) 3. To be extra safe, inform your head of department. Upvotes: 3 <issue_comment>username_6: My opinion as a former student: It is known that a percentage of persons in positions of authority are seen as corrupt, or at least not as honorable/neutral as they should be according to their position. For example, I know of doctors who get gifts worth 4- to 5-digit sums from companies for prescribing their drugs instead of other ones. A patient knowing this might ask himself whether alternative drugs might be cheaper or more effective, whether the doctor prescribes this one only for his incentives, and whether the doctor is doing his duty to the best of his knowledge and belief. The same goes for teachers at universities. Your question suggests that you don't want to be one of them. I know that complete neutrality is impossible for universities because * they have limited funds, and they get free learning material or samples of private companies' own products * there is no company-neutral product/method, and there is no time in the curriculum to cover all of them to the same extent So what should you do? * If you are affiliated with a product/company or get money directly or indirectly, make it clear * There should be no comparable product that is noticeably cheaper (comparable in the sense of quality, depending on how it is used in class) * Especially if it is not the standard product used by most other universities and there is no special reason for choosing it * If you are unsure, ask a neutral colleague if they would recommend this product for this course. Other points to consider: * Is it free for the students, and are the students required to use it? * What/How much is the incentive? (E.g., there is a free plan which gives you 15 free x and with the referral you get up to 30 free x, or you get x$ per person with no upper limit, especially if multi-digit sums are expected for referring all your students.) * What are the benefits for the students? Do they also get free x? Is this the only affordable way for the university to provide an acceptable teaching standard? Is it even connected to the course? We once managed to get a class replaced by the administration because the teacher was paid by a company and tried to force all of us to get a credit card and use an online service of the company, which didn't provide any significant benefits for the class. In my opinion there is no black/white line. It should be clearly visible that you recommend it not for your own benefit but for the benefit of the students. Sometimes it's better to show this instead of only saying it, which may require forgoing the benefits, or at least providing a non-referral link too.
More than one politician has been discredited for awarding a big contract to a company from which they had received big donations during an election campaign, even though they said this was not the reason for the choice. Upvotes: 2 <issue_comment>username_7: **Create a second account with your university email address and use that referral code**. --- There's a clear conflict of interest in using your professional role to benefit your personal side projects. If I were a student, noticing a referral link from which you benefit personally would be a *huge* turn-off. However, you are missing an easy opportunity for yourself and your students to benefit from this. When you act in your professional role in a way that benefits the university, your students, or your department, use this account and the credits that come with it. When working on side projects, use your personal account and cover the costs yourself. When using the link, make it explicitly clear that you/your department are getting a credit for it. Because all the credits obtained go back to the students and the university, the conflict of interest is eliminated. Upvotes: 3 <issue_comment>username_8: Similar to several other answers: * When I taught computer science and referred students to the company I consulted for, I had the company send the referral bonus to the department. * When I assigned a textbook I'd written, I arranged a no-royalties price with my publisher. Had that not been possible, I'd have donated the royalties to the university. (I think that in fact my school's faculty union contract specified that I could not profit from those royalties, but I'd have done what I did in any case.) You should be able to make a similar arrangement for the web referral. Upvotes: 2
2015/12/30
1,373
5,521
<issue_start>username_0: I've been asking a lot of questions on here lately (my apologies to the admins for that), but I've become very worried about my future as I've had a terrible semester. I'm a second-year student currently studying mathematics at a Canadian university, and as of now I have two Cs (one in a math course) and an F on my transcript (thankfully, from a non-math course). I'm very worried about how this will hurt my application to grad school (in terms of getting research experience and the transcript itself), particularly as an MSc candidate. I know I have a lot of time to make up for my mistakes (academically) this semester, but the way I see it, I'm pretty much out of options until maybe next year when my GPA (currently sitting at ~3.4-3.45) goes up, considering a lot of research opportunities at my school are based on GPA. I am very interested in pursuing research in some areas of pure mathematics. However, like I mentioned earlier, I know for a fact my school awards research grants based on GPA and seniority, and thus because I:

1. am a second-year student and do not have many higher-level maths courses under my belt;
2. have a mediocre GPA; and
3. have received lower marks in second-year courses than in first-year ones...

it is highly likely I will get rejected by the professors at my school and not get the grant. My question now is **what else can I do** to gain experience/skills/contacts within the department at my school? My lackluster grades were a result of poor time management, and I know I can do better. When I asked a similar [question](https://academia.stackexchange.com/questions/60720/as-a-2nd-year-undergraduate-what-should-i-do-to-enhance-my-maths-grad-school-ap), I was recommended to seek out professors to do informal reading with. I am still planning on doing that; however, is there anything else that I can do? I was hoping to ask one of my profs, but I ended up doing much worse in his course than I expected, so I don't know if contacting him would be reasonable anymore. Anyway, this is a question mostly to see what I can do to redeem myself. I still have two more maths courses this coming semester that I have the chance to do well in, and so far this C would be my lowest maths-related mark yet. Hopefully that doesn't completely take me out of the running. Thanks in advance for all your help.<issue_comment>username_1: > > a lot of research opportunities at my school are based on GPA. > > > A big reason for this is that doing research consumes a *lot* of resources: time, attention, mental energy, etc. At my university, students who don't have a certain minimum GPA aren't eligible for department-funded research positions. The reason for this is that it's not in the students' best interest - if they're *already* doing poorly in their coursework, doing research at the same time is liable to make things a lot worse. > > My lackluster grades were a result of poor time management and I know I can do better. > > > Great. That's what you need to do in order to gain access to research opportunities in later semesters: show that you *can* manage your time, *can* excel in your coursework, and that a professor who takes you on as a research assistant isn't going to have to worry about you ending up on academic probation because you can't juggle it all. For grad school applications, too, showing (by means of excellent grades in later years) that you have more than recovered from your not-so-great sophomore year will be important.
I'm not in mathematics myself, but it's my impression that mathematics (more so than most other fields) requires a lot of prerequisite knowledge before you can do meaningful research. Thus, without having done well in your coursework, including higher-level courses, it is unlikely that you will be able to do meaningful research. (Even if you have done higher-level coursework, some mathematicians on this site consider undergraduate research experiences to be more "experience" than "research" anyway.) See for example [Is it common for an undergraduate thesis in pure mathematics to prove something new?](https://academia.stackexchange.com/questions/49595/is-it-common-for-an-undergraduate-thesis-in-pure-mathematics-to-prove-something). Perhaps one of the math users will chime in with more details. Upvotes: 2 <issue_comment>username_2: Honestly, you should take a step back and focus on your coursework first. Research experience is great and often necessary for graduate applications, but it's useless if you don't have good grades in upper-level classes. Focus on showing that you can do well in courses before worrying about adding research to your workload. Also, excelling in an upper-level class is probably the best way to connect with a professor and get a research opportunity at this point. If you can get to know a professor a little through class and office hours and impress them there, then it's much easier to ask them after the class is over about research opportunities. The key, however, is that you have to actually impress them, and no professor is going to be impressed by a C or even a B in their class. It also helps if you are willing to work as an unpaid research assistant or similar. Getting funded positions as an undergraduate can be very difficult, especially in math, since it's unlikely you will do any real work. But unpaid positions can still be very valuable and are much easier to obtain. Upvotes: 2
2015/12/30
1,140
4,880
<issue_start>username_0: I hope I don't sound like I'm complaining, and I understand professors and academics are human. My question is one of inquiry rather than venting. I have e-mailed a few professors about certain professional opportunities. This is not my first time doing so, and I am familiar with proper e-mail etiquette and how to write a message that minimizes the chances of being ignored. I have also received replies in the past. However, I e-mailed a few professors about a week ago, and my inbox is crickets so far. Is it customary for academics to take a break from e-mailing during the holiday break?<issue_comment>username_1: Yes, it's common. Many take a break from work entirely during this season. Some reduce their working hours to spend time with family, and only answer urgent email. Some may be traveling and have limited or no access to email. Some (like me) may be swamped with grading and other end-of-semester duties and are focusing on those at the moment. If you haven't heard back by mid-January, try a polite follow-up then. Upvotes: 6 <issue_comment>username_2: While we can't guess for the particular professors you're dealing with (and some professors never really take any time off), it's quite common for people in the United States to take significant time off around the holidays. The more responsible ones will set a vacation message letting you know when they'll be back, but really, you shouldn't expect to hear anything from anyone until well after New Year's. Upvotes: 4 <issue_comment>username_3: Leaving aside the timing of your request, if you were not my student and cold emailed me about a "professional opportunity," I would view it as academic spam and would put it in the lowest priority queue, right after re-upping my free professional journal subscriptions. It might just be your phrasing, but from my perspective as faculty, it sounds more like opportunity cost than opportunity. Upvotes: 6 <issue_comment>username_4: In addition to the excellent answers given already, you should also consider that even those professors who do answer emails during the holidays will usually do so based on perceived priority - and I doubt that your cold-call "professional opportunity" email is important enough to many professors to answer between visits to relatives and stuffing them with whatever their traditional holiday food is. You should try resending it when the holiday period is over. Upvotes: 4 <issue_comment>username_5: I would think that as professionals, most check their emails several times a day. That, of course, is not the same as responding. Upvotes: -1 <issue_comment>username_6: It's easy for stuff to fall through the cracks during the Christmas-->New Year's vacation time, at least in countries for which this is the central winter holiday period. Don't take it personally. I would find out, for each professor, the academic calendar where they work (this is always easy to find online). I'd wait until the middle of the first scheduled week of classes, and send them a polite follow-up email. If they don't respond within a week, then they are probably not interested. Upvotes: 1 <issue_comment>username_7: First, you are not entitled to the professors' time in matters of your personal professional development. You are asking for a favour from them here, and, even if you frame it as a matter of etiquette, it is something you want from them; thus, you should consider the obligation to be more on your side than on theirs.
Second, concerning permanent reachability by email: I do not expect my research fellows or my PhD, MSc, or BSc project students to respond to emails etc. during weekends or vacation; my arrangement with them is made in advance in such a way that contact during their rest/break time becomes necessary only in the rarest emergency, in which case I escalate to phone contact. You may want to consider the possibility that the professors you are contacting want to enjoy the same privilege. Clearly, your search for a professional opportunity was not unexpected and is in no state of extreme urgency; you could have planned ahead to avoid the holiday period. To come back to my illustration, the test is this: if you feel it would be intrusive to ring the professor up at 23:00 on Saturday night or during the morning of Boxing Day (which you likely will), you should also not expect an email from them during this time (at least I hope so, for the sake of their family). That being said, some people do not respond during their absence, but then simply empty their mailbox once they are back at work; I would consider this equally disrespectful, on the flip side of your problem. For such people (or simply forgetful ones), a polite reminder shortly after the vacation ends is perfectly in order. Upvotes: 0
2015/12/30
1,362
5,998
<issue_start>username_0: We obtained a report for the last academic year. A shocking revelation was that the capacity utilization* of lab equipment, laboratories, instruments, and machines is only 37%. Everyone seems to be happy with this result; most of the comments are that science can't be lucrative and researchers cannot be forced to work with already-existing equipment. If they want to expand their research and buy a new machine (even if we have 3 already), that should be allowed: "your grant, your rules". Is there any organizational method to overcome this "abeyance" (idleness) of the faculty's overall facilities? *Utilization was calculated from the time and resources being used, versus the number of publications and the number of employees (students, professors, researchers).<issue_comment>username_1: Unfortunately, the 37% capacity utilization number is essentially meaningless on its own, because it does not directly relate to the actual metric that one might wish to optimize. When attempting to improve efficiency, it's important to have a precise understanding of what the ultimate metric is that is being maximized. In a university research setting, the metric being maximized is *not* utilization of equipment, but rather something regarding research being accomplished and effective use of grant money. To maximize such metrics, it's actually *important* that some equipment be idle some fraction of the time. The problem is, research projects typically have highly uneven and unpredictable resource utilization profiles. For example, my collaborators will often have 1-2 day bursts of flow cytometer use, in which they use a $200K machine for an hour or so every few hours, followed by a weeks-long gap while they prepare the next experiment. It's very difficult to interleave usage during such bursts without distorting somebody's experimental plans, and an experiment may need to be started several days before the flow cytometer is first run. Counterintuitively, this means that *overall experimental efficiency demands that flow cytometers stand idle most of the time*. An even more extreme example is common tools like pipettes or screwdrivers: if anyone ever needs to spend more than a few seconds looking for such a tool, then operations are clearly inefficient. As a professor of mine once told me: "If you can't just reach out and pick up the screwdriver you need, you don't have enough screwdrivers." This means that such common tools must have exceedingly low capacity utilization in order to be used efficiently as part of the larger workflow. That same professor, on the other hand, now runs an operation in which an automated high-throughput mass spectrometer is carefully scheduled to run 24 hours a day, since it is the key high-value bottleneck of an entire pipeline. In that case, efficiency means 100% utilization (but also that when they have grown enough, they will probably add another mass spectrometer). Bottom line: if you want to improve efficiency, knowing the utilization of equipment is a useful starting point, but it can only be properly interpreted in terms of the larger workflow in which that equipment is used. Upvotes: 4 <issue_comment>username_2: The U.S. Department of Energy has a system of user facilities. Access to these facilities requires an application and scheduling. Access is typically free. At least some of these facilities reach 100% utilization. Each tool in each facility may have its own rules. These are often listed on websites. Facilities are incentivized with funding to reach their utilization targets.
Example: <http://www.anl.gov/cnm/user-information/user-access-program> Upvotes: 2 <issue_comment>username_3: To complement earlier responses, two types of cost come into play:

1. context-switching costs: having to go to a different department to use the tool, or having to re-calibrate a shared tool every time you take it from the joint pool;
2. unpredictability due to usage fluctuations: it is known from queuing theory that as utilisation approaches full (theoretical) capacity, waiting-time fluctuations grow dramatically (see the numerical sketch at the end of this thread).

Sometimes not being able to estimate whether a tool will be free to use can hamper your productivity more than the cost of the tool. There is a reason why a rigid scheduling regime is implemented only for really expensive devices. So, 37% may represent a perfectly good balance of these costs, and as a measure of utilisation it is not sufficiently informative in isolation. Upvotes: 2 <issue_comment>username_4: Take my advice - invest in human hamster wheels to generate lab electricity, rather than buying it off the grid. So long as you make sure the wheels aren't **too** efficient, it should be possible to get most if not all of your researchers utilizing the equipment, 24 hours a day, 7 days a week. You may see a slight dip in the utilization of other lab equipment, but the high usage of the hamster wheels, in both number and duration of uses, should pull up the whole departmental average. [![enter image description here](https://i.stack.imgur.com/2Gtqc.jpg)](https://i.stack.imgur.com/2Gtqc.jpg) I am, of course, being facetious. A lab is not a factory that turns some arbitrary raw material into widgets en masse. Time-and-motion principles of labour management simply do not apply. If anything, you want scientists using equipment **less** for the same amount of science published. In all seriousness, the only machine whose utilization is worth tracking would be the coffee machine. Lab efficiency, however, is a different ball game. Labs can be more or less efficient, but that's a different question for a different post :) Upvotes: 2 <issue_comment>username_5: Well, if you're looking into increasing your resource utilization you might want to check out [Clustermarket](https://www.clustermarket.com/) - they let you share your equipment externally and earn additional income, which is always an added bonus. They also have a white paper about resource utilisation that could be useful. Upvotes: 2
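To make the queuing-theory point above concrete, here is a minimal numerical sketch. It assumes the simplest textbook model, an M/M/1 queue (Poisson arrivals, a single instrument, exponential service times), which real lab scheduling certainly violates, and a hypothetical service rate of one job per hour; treat the numbers as illustrative only.

```python
# Minimal sketch: mean waiting time in an M/M/1 queue, W_q = rho / (mu * (1 - rho)).
# Assumptions (hypothetical): Poisson arrivals, one instrument, exponential
# service times, service rate mu = 1.0 jobs per hour.

def mean_wait(rho, mu=1.0):
    """Expected time (hours) a job waits before the instrument becomes free."""
    if not 0 <= rho < 1:
        raise ValueError("utilization must be below 100% for a stable queue")
    return rho / (mu * (1 - rho))

for rho in (0.37, 0.60, 0.80, 0.90, 0.95):
    print(f"utilization {rho:4.0%}: average wait {mean_wait(rho):5.2f} h")
```

Under these assumptions, a job waits about 0.6 hours on average at 37% utilization, but about 19 hours at 95%. This is the sense in which pushing shared equipment toward full utilization can cost far more in waiting than the idle capacity it saves.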
2015/12/30
571
2,504
<issue_start>username_0: I received some advice from two researchers, which was helpful at the time. So, I mentioned the researchers in the acknowledgements section of my paper's early version. Since the first draft, the paper has been substantially modified, and their advice is no longer very visible in the paper. Should I still keep the acknowledgement? I feel it would be difficult to remove it, since the paper is being circulated among more than ten co-authors. I would have to give some reason, and I worry I would look silly saying that their advice was, after all, not so useful. What should I do?<issue_comment>username_1: If you consider that they helped you, and merit some thanks, acknowledge them. That nothing of the original advice remains in the current version isn't so relevant. Help can very well be something that doesn't have a direct result in the text. Upvotes: 3 <issue_comment>username_2: In your particular case, you could consider changing the acknowledgement to something like: > > The authors would like to thank <NAME> for his constructive feedback on an earlier draft of this paper. > > > In this way, you still acknowledge the person for their feedback, even though it may not be as relevant in the current version of the paper. I should mention that the advice above may be field-dependent. For example, in my subfield of EE, what I suggested above is a common solution for your particular situation; however, as @Wrzlprmft pointed out in the comments, my suggested modification is a typical acknowledgement in other fields *"to indicate that somebody performed an internal peer-review."* In the event that you modify the acknowledgements (or make any change to the paper, for that matter), I suggest that you circulate this modification to your co-authors so that they are aware of the change. Upvotes: 3 <issue_comment>username_3: > > *How to decide who to include in the acknowledgements?* > > > **Generously.** If people gave up their valuable time to help you in the process of developing that paper, it is kind to acknowledge that. This is true even for contributions that are very helpful but may not be directly reflected in the final text, such as helping you avoid a long rabbit-hole, or programming for some part of the data collection or analysis. Per username_2's answer, you can have the acknowledgement accurately reflect that the contribution was to an earlier draft, or whatever the contribution was, but they still helped. Upvotes: 4
2015/12/30
953
3,605
<issue_start>username_0: My apologies in advance if this question is better suited to be migrated elsewhere. In a standard (X,Y) Cartesian line/dot plot, the axes should be placed in a contextually logical place. For a graph presenting real-world quantities that can only be non-negative integers (>= 0), I am told that the only acceptable intersection for the axes is (0,0), even, it seems, at the expense of clearly seeing a point at X=0... I would like to ask whether, in academic publications, it is valid to let your reader see all your points clearly by applying a subtle shift (padding the axis to the left). See the figure included. [![enter image description here](https://i.stack.imgur.com/pOLkj.jpg)](https://i.stack.imgur.com/pOLkj.jpg) The opposition I am receiving to this idea stems from the inference that placing the y-axis in a negative region suggests that negative values are possible, even though going below 0 is physically impossible in this context. But because the x-axis stops (without labels) at the y-axis and doesn't carry past it into the negative quadrant (except for the tick mark at 0), no negative values are implied, and the point and its error bars are easier to see. The range is quite large, so these error bars are hard enough to see already. Here are some of the consequences I see when I place the axes at (0,0):

* The point gets a little hard to see, and the error bar almost entirely disappears.
* Error bar caps look like y-axis ticks, which makes the y-axis messy and the error bar almost invisible. Increasing the thickness of the error bars looks very messy.
* Increasing the point size could swallow the error bars entirely in this particular context.

<issue_comment>username_1: The point of a figure is to convey *meaning* and to do so in as easily understandable a way as possible. So, if you have a case where you think that the result is *clearer* if you offset the x-axis slightly, then do it. The guiding line should be: does it help the reader understand the important point better than any other way of presenting the data? Upvotes: 4 [selected_answer]<issue_comment>username_2: > > Is it valid to pad an axis slightly in order to enhance point > visibility? > > > Yes. For example, in the textbook by <NAME>, *Introductory Statistics*, *all* of the examples including data in the 0-class are treated this way: [![TV sets per household histogram](https://i.stack.imgur.com/v9xGU.png)](https://i.stack.imgur.com/v9xGU.png) > > Note the symbol // on the horizontal axes in Figs. 2.3(a) and (b). > This symbol indicates that the zero point on that axis is not in its > usual position at the intersection of the horizontal and vertical > axes. Whenever any such modification is made, whether on the > horizontal or vertical axis, the symbol // or some similar symbol > should be used to indicate that fact. (Weiss, 8E, Sec. 2.3) > > > While this is used throughout on the horizontal axis, what the reader is warned against is modifying the location of the zero on the *vertical* axis: > > Figure 2.12(a) is an example of a **truncated graph** because the > vertical axis, which should start at 0%, starts at 4% instead. Thus > the part of the graph from 0% to 4% has been cut off, or truncated. > This truncation causes the bars to be out of proportion and hence > creates a misleading impression... Truncated graphs have long been a > target of statisticians, and many statistics books warn against their > use. (Weiss, 8E, Sec. 2.5) > > > Upvotes: 3
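As a concrete illustration of the advice above, here is a minimal sketch of the left-padded layout. It assumes Python with matplotlib, which the question does not name, and uses made-up data; with the x-limits padded slightly below zero, the marker at x=0 and its error bar sit clear of the y-axis, while the tick labels still start at 0.

```python
# Minimal sketch (hypothetical data): pad the x-axis slightly to the left so a
# point at x=0 and its error bar are not swallowed by the y-axis.
import matplotlib.pyplot as plt

x = [0, 1, 2, 3]            # only non-negative integers are physically possible
y = [5.0, 7.5, 6.0, 9.0]
yerr = [0.4, 0.6, 0.5, 0.7]

fig, ax = plt.subplots()
ax.errorbar(x, y, yerr=yerr, fmt="o", capsize=4)

ax.set_xlim(-0.25, 3.25)    # subtle left padding; the left spine now sits at -0.25
ax.set_xticks(x)            # ticks/labels only at the possible values, starting at 0
ax.set_ylim(bottom=0)       # never truncate the vertical axis (cf. Weiss above)

fig.savefig("padded_axis.png")
```

Nothing past the tick at 0 is labelled, so the padding reads as breathing room rather than as a claim that negative counts exist.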
2015/12/31
2,018
8,527
<issue_start>username_0: I'm a sophomore at a Canadian university studying mathematics. Since near the end of first year, I've thought about attending graduate school in some mathematics field (this could also be because I don't know what else I can do with my degree/knowledge). Thinking about it now, however, I've noticed (through some painful experiences and unnecessary, overly ambitious classes this year) that perhaps mathematics may not be my life (although I do enjoy reading about it and studying different aspects of the field, and even if I don't do well on an assignment, test, or class altogether, I'm not typically turned off the subject if it is maths-related). Thus, I'm thinking about all this as someone who could potentially be doing mathematics in grad school. Right now, I have the freedom to take other, non-maths courses (I'm minoring in film, for example). Will this freedom continue in grad school? As well, I'm fortunate enough to be in an undergrad program that encourages cross-faculty research. Thus, if I wanted to look at mathematics from a film point of view (or vice versa) for a research project or thesis, it would be encouraged. However, from what I'm seeing in my grad school research, you are typically in grad school for one specific field and do research on that topic for your entire degree (which, I understand, is usually shorter than the 3-4 years of undergrad study). So I guess my question is, what do you do if you're in grad school but grow weary of your research? I'm sure you can't just up and leave, but what if you want to research something that isn't offered by a grad institution, or want to merge two topics together? How does the process work then? Or what if I wanted to do research with the philosophy department on a mathematical topic (e.g. something like logic)? How would that work? Do grad institutions give a degree of freedom to their students, or is it all about the adviser and what they want? And if the latter is the case, what do you do if you don't want that? Do you have to just suck it up and do it anyway?<issue_comment>username_1: I'll give a few suggestions here because I did go through an M.A. program in mathematics and statistics at a U.S. state university that borders Canada, and it sounds like your interests are very similar to what mine were. First, to my mind, sophomore year sounds very early to be planning out grad school (although I could be wrong). I'd say that the courses that informed me what a math major was really about didn't even start until junior year. Have you gone through proofs courses, abstract algebra, analysis at this point? That's the real preparation for graduate work in math. It's definitely true that some people who were good at high-school calculation-style courses get into college math programs, find them to be a different world entirely, and opt for some other program. So, first suggestion: take some more time and decide whether the real math discipline is for you. Second, I was likewise able to pursue a whole lot of other interests when I was an undergraduate -- including student film projects and a double major in philosophy. Through all that, the math program seemed natural, attractive, and fairly easy (with A's throughout and a top senior award). But when I transitioned to an M.A. program in math -- at the same university -- it seemed like a light-year advancement in difficulty.
It was, frankly, nigh overwhelming, and I basically had to give up almost all my other endeavors just to stay somewhat afloat. Part of the rationale of the graduate program is an expectation of complete focus and dedication to the discipline, as the start of a lifelong commitment. At least that's how it seemed to me; your mileage may vary. Finally, I would be somewhat surprised if a math graduate program had any tradition of doing interdisciplinary work, particularly with soft subject areas like film or philosophy. Perhaps it would be easier to work in one of those other disciplines and port in your knowledge of and interest in mathematics. If someone has different knowledge on that score, I'd be very interested in hearing about it. Upvotes: 2 <issue_comment>username_2: First of all, as @ff524 points out, there is a world of difference between a master's program and a PhD program. If you don't understand the difference, then you know too little about graduate school to be thinking about it in any meaningful way. In the US a math master's degree takes about 1-2 years and is not a prerequisite for a math PhD program. (In fact, although some students can use a strong performance in a master's degree to "launder" their academic record and gain admittance to a solid PhD program, in most cases enrolling in a master's program signals a much smaller commitment than enrolling in a PhD program.) In Canada it seems to take a bit longer: 2-3 years, and up until recently it was viewed as a prerequisite for getting a PhD. (Things seem to be changing a bit, although probably the majority of Canadian math PhD students still do separate master's degrees, which adds to the total time to get a PhD.) Getting a PhD in either Canada or the US takes 4-7 years; I believe 5 years is the most frequent figure, though the mean is higher. Moreover, a PhD is the required certification for a future academic career in mathematics (it used to be that a master's degree let you teach at 2-year colleges, but that is rapidly ceasing to be the case in the current job market). The purpose of a master's degree in mathematics is much more fluid...or it may serve no purpose at all. You should know that the lowest common denominator for master's degrees in math is indeed very low: it is well known to US graduate programs that an undergraduate degree at a top institution with good grades is usually better than a master's degree from a mediocre institution. It is fairly easy to "weather through" a master's degree in mathematics, even if your level of interest wanes in the middle (partly because there need not be much middle). If you have some specific career goal for which a master's degree in mathematics would be advantageous -- e.g. high school teaching -- then not being unilaterally devoted to mathematics is not a serious obstruction. By contrast, if you are not interested in mathematics almost to the exclusion of other academic fields, then a PhD program in mathematics is probably not for you. (Certain branches of applied mathematics are exceptions to this, but the vast majority of students who do a PhD in a "mathematics department" are not *that* applied.) > > Right now, I have the freedom to take other, non-maths courses (I'm minoring in film, for example). Will this freedom continue in grad school? > > > Technically yes -- graduate students are allowed to enroll in courses in any department, just like undergraduates. In practice, the freedom will be severely restricted.
If, as a graduate student, you take a course outside of your department, then you should expect to be asked to explain its relevance to your mathematical course of study. Thus it is relatively common to take language courses and courses in related fields (CS, physics, statistics...). You should probably not be taking "elective" courses -- e.g. if you took a course in film, it would have to be strongly skewed to the technical aspects of image production/reconstruction/whatever, or you will probably find yourself having to defend doing something that interferes with your research. Moreover, theoretical mathematics (and *some* branches of applied mathematics) is one of the least interdisciplinary of academic fields. If you want to work on logic, then you may well find an expert advisor or co-advisor in the philosophy department -- but nevertheless you should expect your work to be at least 90% mathematics. There must be a few math PhD students who take more than a small handful of philosophy courses...but I have never met any. All in all: **you should not seriously consider a PhD program in mathematics unless (i) you are single-mindedly, passionately devoted to mathematics or (ii) you have a very specific career goal for which a PhD in mathematics is beneficial, and your plan has been vetted as reasonable by multiple trusted mentors**. If you like but don't love mathematics as you are completing your undergraduate degree, plan on continuing to read, learn, and perhaps do mathematics after you graduate, at your own pace...as a hobby. Upvotes: 4
2015/12/31
2,380
10,392
<issue_start>username_0: I have recently got another citation, which brought my citation count to three. Even though I am proud of my three citations, others may find it funny or pitiful. So I am having doubts about whether I should make my Google Scholar profile public and share my achievements with the world. Most profiles on Google Scholar I click on have over 100 citations. I will be lucky to get that many in 20 years. I think the decision to make the Google Scholar profile public depends on citation count plus several other factors. The other factors that come to mind are: my age, number of publications, field, level of the institution I am affiliated with, etc. I am 30, a young economist still working on my big paper; I have a PhD from a low-level university in Eastern Europe and am presently working at a not-so-high-level university in China, but I have aspirations to make it to an average American or Western European university. It seems to me that public Google Scholar profiles are for accomplished researchers and that I should just wait for a decade or two. But what concerns me is that as I attempt to publish my next paper or apply for a grant or a job, the editor/reviewers/committee will bring up my profile on Scopus or some other portal, which will be missing some citations. In my case, Scopus shows two citations instead of three! I feel like someone must be laughing at this point, but I am sure even the best go through this stage in their careers. Does anyone have advice for me and other scientists with fewer than 100 citations? When do people at prestigious universities usually open up their Google Scholar profiles? At not-so-prestigious universities? In your department?<issue_comment>username_1: As someone who is currently on a search committee for two tenure-track positions in mathematics, I can tell you that "hiding" your profile certainly won't help. In the job search process, potential employers are going to want to know about your research and its impact. It's common to check on Google Scholar, Math Reviews, Web of Science, and similar databases to see what impact a candidate's research has had. Many of the candidates for faculty positions that I've seen don't have public profiles, and in my opinion this makes them look weak in comparison with candidates who do. If you don't have a public profile, then someone trying to evaluate your research who looks at Google Scholar will probably find one or two of your most cited papers, or perhaps find nothing and assume that you have essentially no citations. Another possibility is that they'll find a confusing bunch of papers written by other authors whose names are similar to yours. Having a public profile ensures that they'll see all of your publications and citations. Even if there aren't many citations yet, this also shows that you're aware of the importance of documenting your research activity and its impact. Upvotes: 6 <issue_comment>username_2: I am also in math, and have also served on search committees, and have a quite different opinion from username_1, though I am in pure maths whereas Brian seems to be in applied math. Most mathematicians I know don't have public Google Scholar profiles, and I personally think there's not much reason to if you put your papers on the arXiv or your webpage. **Revised:** There are some reasons for a mathematician to have a public profile. It can provide a minor convenience for people following your work, and Google Scholar gives you some features for making your profile public, as mentioned in the comments.
Also, it doesn't require a subscription like MathSciNet, and it is easier than making a webpage with all your papers. However, I personally don't have one because: (1) I don't like advertising an (inaccurate) citation count and don't want to encourage the use of citation metrics to compare scholars, and (2) I don't have control over inaccuracies in my profile (e.g., different papers of mine counted as the same, or a paper that's not mine being auto-merged with one of mine). (That said, if it starts becoming standard in my field and supplants MathSciNet and the arXiv, I probably would make my profile public.) While a high citation count is impressive at first glance (and I do look at citations for some job candidates, on MathSciNet or Google Scholar), I don't put much stock in it when evaluating research, especially for younger researchers--in fact I think it is wrong to focus on, one reason being that citation-count tendencies are heavily subarea-dependent. However, my opinion is probably area-dependent, and I probably feel this way because most of my colleagues don't have public profiles and we aren't crazy about citation counts like people are in some fields. You may want to ask a separate question about how citation counts are viewed for grant/job applications in your specific field, as I think this is highly dependent on research area. **UPDATE (2019):** Over the past couple of years, I think Google Scholar has been getting more popular in pure math as well, and I decided to make my profile public. However, I did this with somewhat mixed feelings (for the above reasons), and I think it is still okay not to make your profile public if you do not want to. (Certainly not all good job candidates have one.) However, if your citation counts on Scholar are impressive, it can certainly help you on the job market, as some people will look to see whether you have a public profile. Upvotes: 3 <issue_comment>username_3: Congratulations on getting your third citation -- I don't think it's silly or laughable at all to be happy about something like that (I would be similarly happy to discover my citation count increased by 50% overnight ;-)). But let me offer a bit of perspective: I'm also in math, and in my area at least citations are not an especially important metric; in particular, for junior researchers (say, at the postdoc or assistant professor levels) I don't remember ever knowing or caring how many citations a job candidate (or someone else I was interested in for professional reasons) has. For more senior researchers this becomes a bit more important (meaning a bit more important than not at all, but still not particularly important). As for a Google Scholar profile, that is another thing that in math **nobody cares about**, at least not *per se*. I have never looked up anyone's Google Scholar profile (except maybe if it came up as the top search result in a Google search), and didn't even know until very recently that they exist; similarly, all that matters to anyone I know who is evaluating a job candidate is that there is an easy way to access the candidate's CV, list of publications, and downloadable files of their papers. If you can achieve that with a Google Scholar profile, great, but any other way, such as a personal web page, or uploading your papers to arXiv (or an equivalent repository for econ papers), would work just as well.
In fact, for those with the patience and technical ability to set one up, a personal web page is in my opinion the best way, since it gives you complete control over how you communicate information about yourself to the world. Now, I understand that you're in economics and things may work a bit differently there. In particular, with economics being a social science, I would imagine that it may be a lot more important for you to show that your research is part of a larger dialogue within the research community, and citations would be one way to show this. Nonetheless, I think it's important to keep in mind that your goal should be to do the best research, not to do the research that gets the most citations (and of course those two goals are sometimes, but not always, aligned). And when you cultivate your "brand" by deciding whether to set up a Google Scholar profile or considering any other such question, I suggest focusing on the goal of communicating to others, in the best way, why your research is important, not on a superficial question like whether you will appear to have 2 or 3 citations. Just my 2 cents, which you should take with a grain of salt because I'm not very familiar with your area. Upvotes: 3 <issue_comment>username_4: As a mathematics postdoc, I really appreciate people having public Google Scholar profiles. This is not for hiring purposes; rather, they provide a convenient way of checking the recent and most-cited work of a person or, if they are junior, all of their work. ArXiv might give two-thirds of the benefits, if the researcher is a mathematician (or a physicist). Further, I can, with fair ease, see who has cited their papers, and maybe see their coauthors in the sidebar. Scholar is also pretty good at finding freely available PDFs or previews of books, which is nice, and it also takes institutional access into account to some extent. Even more conveniently, I can follow the researcher's work, the work of others citing them, or even related research if their work is very interesting to me. I can do all of this from home or work without extra effort, since the service is not behind a paywall. I would recommend having a Scholar profile once you have a publication, to make it easier for others to find out what you are doing. Maybe clean it up once in a while to merge duplicates or to remove research that is not your own, especially if your research is in several repositories or there are many people with the same family name and initial(s). Maybe add coauthors, if you want to, but that is an extra service. Upvotes: 3 <issue_comment>username_5: Zero. The purpose of a profile is not to boast about your citations. It is to make it easy for people to find your publications using your name. All you need is one publication, and then you should have a Google Scholar profile. Upvotes: 2 <issue_comment>username_6: I think it is perfectly reasonable to make your Google Scholar profile public irrespective of the number of citations. The profile still allows people to see your work and know what you have written about. Three citations is far fewer than most practicing academics have, but it is still a scholarly accomplishment, so you are right to be proud of it. If you are brave enough to share your profile with the world with a relatively modest number of citations (by academic standards), that will make others less reluctant to share their own profiles as well. Upvotes: 2
2015/12/31
715
2,881
<issue_start>username_0: I use a long term (e.g. "artificial neural network") more than once in the abstract of a paper. Is it best to introduce an abbreviation (e.g. "ANN") after the first occurrence, or to repeat the non-abbreviated term each time?<issue_comment>username_1: Some journals' policies do not let the author use abbreviations within the abstract, but such policies are not common, at least in my research area. ***Abbreviations were invented precisely to handle this very issue: repetition.*** It is worth mentioning that, in some cases, the abbreviation is well known enough in the corresponding community that one could contend that stating the expanded form is redundant. For instance, in the field of robotic systems, *AI* is universally understood to stand for *artificial intelligence*, and one typically does not expand it, **even within the abstract**. However, you had better stick to the official rule, which says: > > Introduce the abbreviation with its corresponding long phrase, **once**. Then use the abbreviation wherever you need it. > > > Upvotes: 3 <issue_comment>username_2: It is almost certainly not "best" to introduce acronyms/abbreviations for terms like "artificial neural network" in the abstract. I can think of two reasons to use acronyms/abbreviations in a manuscript. The first is to save words/space, and the second is to improve readability. While using abbreviations is a quick way to save a few words, generally you will be better off spending more time (assuming the deadline is not pressing) thinking about why you have hit the word limit. Cutting words is often better done by saying things more concisely or leaving out unneeded details. In regards to readability, the abstract is different from the body of the manuscript. While in both you need to consider readability for both experts and non-experts, the percentage of non-experts reading the abstract is higher than the percentage reading the body. Therefore, the readability of the abstract should be geared towards the non-expert. What this means is that while acronyms like ANN may help the expert reader, they will not help the non-expert. On the other hand, acronyms like NASA are more readable to the non-expert than National Aeronautics and Space Administration. Abstracts are tricky. For example, APA 5 style used to say that abstracts had to be self-contained. That meant you had to introduce acronyms in the abstract and then again in the body. In APA 6, this has been dropped (cf. [this blog post](http://blog.apastyle.org/apastyle/2011/11/brevity-is-the-soul-of-lingerie-and-abstracts.html)). I still go with introducing the acronym on first use in the abstract and then again on first use in the body. I also introduce acronyms on first use in figure captions (not sure if that is APA style or Strongbad style). Upvotes: 4
2015/12/31
923
3,902
<issue_start>username_0: We (including my teacher) designed a new algorithm in computer science, and my teacher proved a good bound on its time complexity, but I'm not sure about the proof and worry that it may be wrong. Can I still submit the paper to a journal?<issue_comment>username_1: My first hypothesis is that all the authors have done their best to get the proofs correct, and that the question was asked honestly, to get advice about publishing in a serious journal. This being said, it is not uncommon "not to be sure", or "to be afraid proofs might be wrong". Indeed, the history of science is paved with significant published papers containing flawed proofs, sometimes with only small flaws. This happens even in mathematics, as detailed in [Widely accepted mathematical results that were later shown wrong?](https://mathoverflow.net/questions/35468/widely-accepted-mathematical-results-that-were-later-shown-wrong) If your proof is OK, great. If not, and the algorithm is interesting enough, the options are:

* a reviewer detects the flaw and rejects: you work some more;
* a reviewer detects the flaw and suggests corrections: you can publish or resubmit, possibly adding the reviewer to the author list (if the contribution is significant enough), or at least to the acknowledgements;
* nobody detects it, but you do afterward: submit a correction.

If you are aware that the proof is wrong, you ought to correct it before submitting. Meanwhile, there exists a huge business of poorly reviewed conferences and journals that will publish anything for a fee, and there has been a huge number of retracted papers recently. This undermines the work of honest scholars. Upvotes: 1 <issue_comment>username_2: **It is your responsibility, not the referees', to make sure that a published proof is correct.** The reviewers should make sure that the results have some merit; some check proofs line by line, and some merely check that the result is plausible and that the proof methodology looks sound. Sometimes errors slip by. Nevertheless, if the result later turns out to be wrong, it is your reputation that is at stake, not the reviewers'. So speak with your advisor and make sure that your proof is correct *before* you submit. It might be the case that the proof is correct and you just haven't grasped all the details because you have less experience. Or it might be the case that there is a flaw to address and correct, in which case you should act before submission. It is not a good research mentality to think "We only have to ~~fool~~ convince one or two referees", in my opinion. Upvotes: 6 [selected_answer]<issue_comment>username_3: You should not submit if you are not confident the proof is right. Why not try to boost that confidence instead? Two strategies I can think of:

* *Give a talk* in your department where you explain the proof to your peers and to more experienced researchers. It doesn't need to be anything formal; it could be a seminar, a research group meeting, or even a reading group.
* *Publish a preprint* instead (for example on the arXiv), or write a blog post. If you are lucky, another researcher who is interested in the result will comment on it.

In general, I am suggesting that you discuss the result with other people before you publish it. It may be that you will catch the mistakes (if any) yourself, just by getting the chance to discuss it with other people. EDIT: As pointed out in the comments, the second suggestion applies or not depending on the OP's level of confidence.
Of course, I didn't mean people should use arXiv users as oracles to check the correctness of their proofs. It is nevertheless, at least in my opinion, an intermediate step before publishing in a journal. And anyway, arXiv was just an example; what I really suggested is to publish it somewhere online in order to give it more visibility. Upvotes: 4
2015/12/31
1,985
8,302
<issue_start>username_0: While giving a presentation, there is one person in the audience (**X**, a PhD student and a friend) who asks all sorts of questions, even unrelated ones. They are also eager to answer questions directed at others while just sitting in the audience. **X** attends almost every talk in the department (according to colleagues) and fires off their questions. During my first few weeks I considered this good academic practice, but I had to change my opinion when the same thing occurred during my own talk, with about 7-8 questions asked during a 30-minute presentation; some of them were blatantly off topic (like asking why I used this numerical method over another one, which I hadn't even heard of). We attended a thesis defense in our department in which the protocol says that the defense committee should ask questions first in the time allotted to questions. **X** couldn't wait, asked during the talk, and this annoyed the committee; one of them stood up and asked **X** to stop! I talked with **X**, and they said they are doing this to push the presenter to their limit, and they described the thesis committee as a *bunch of idiots.* **X** said they were very shy in their school years and worked really hard to reach this level of confidence. **My analysis** Although questions are welcome during a presentation, **X** seems to ask questions for the sake of asking. They tend to ask questions comparing a certain method or phenomenon from their own line of research, which the presenter has never even heard of. How do I deal with them if this continues (surely it will!) during one of my presentations? I don't want to cause offense, as they are one of my few mates on campus. **Side Note**: ***X's questions are not [dumb questions](https://academia.stackexchange.com/questions/17022/useful-strategies-for-answering-dumb-questions-in-a-talk)***, in reference to some comments. They are sometimes off-topic, sometimes seeking an explanation of a particular piece of terminology (during the talk). I feel the questions are being asked simply for the sake of asking a question.<issue_comment>username_1: You are not obligated to accept somebody disrupting your talks, particularly when the questions are off-topic and when much of the audience appears to disapprove. The basic solution is to act as a moderator for your own talk. * For questions during your talk that you find disruptive, a simple solution is simply to say, in a calm and gentle voice, something like: > > I'm going to defer that question for now. We can come back to it at the end. > > > * During a question period, if a person is dominating the discussion, you can say something like: > > I think some other people may want to ask questions as well, so I'm going to ask you to hold your questions for a bit while we hear from others. > > > The key here is to be firm, gentle, and impartial. Your voice should make it clear that it's not about this particular questioner or about you being defensive, just about keeping on topic. After you say something like this, most people will leave the question and let you come back to it later. If the questioner does not, simply repeat yourself, remaining as calm and unruffled as you can, and then *move on*. If the person continues to be disruptive after that point, it will be clear that they are making an unusually large problem, and others are likely to help intervene. Upvotes: 7 [selected_answer]<issue_comment>username_2: My main suggestion is that you complain to your advisor or the professor running the seminar.
Ask them to have a word with **X** and get him to tone down his aggressive questioning, both during your own presentations and during others'. As his peer you are not very well positioned to make such a request of him, but it's certainly the job of the professor in charge to help maintain a healthy and productive atmosphere during talks. A couple of other thoughts: 1. From your description of **X** (describing the thesis committee as a bunch of idiots, making a nuisance of himself during presentations, etc.), I'm guessing he won't last long in the graduate program, so the problem will likely solve itself in due course, maybe even sooner rather than later. 2. This is a borderline unethical idea, so use it with great caution and at your own risk, but I'm thinking that **X** just might benefit from being given a taste of his own medicine... Upvotes: 2 <issue_comment>username_3: [1] A little tip I picked up from boardroom presentations when I was in industry... Imagine you are a keynote speaker at a conference. Begin your presentation with the words "I will respond to any questions at the end of the presentation". If anybody, including X, attempts to ask a question, repeat the words "I will respond to questions at the end of the presentation". Create a final slide, after your summary, that says, simply, 'Any questions?'. If his motivation is to disrupt, X will probably ask far fewer questions this way. [2] If X asks why you have chosen methodology Y in preference to methodology Z, think about the reasons you did use methodology Y. It is perfectly acceptable to answer that this is personal preference. If you haven't heard of methodology Z, it is also fine to say that you are familiar with methodology Y but may consider using methodology Z in the future, when you are more familiar with it. (Do not commit to using methodology Z - your way may be better.) As the person marking your presentation, I would not penalise you for this unless methodology Z was mentioned in the assignment brief. [3] Some knowledge of educational psychology would suggest that X's attempts to show off his expert knowledge and belittle other people by exposing the limitations of theirs are strongly suggestive of low self-esteem (or psychopathy, but let's stick with the most likely scenario). The facts that he claims to be attempting to 'push the presenter to their limits' and to have lacked confidence in school strongly support this. These actions may not be intentionally malicious, but they may make him feel that he is a comparatively strong student and deserves to be on the course. It appears that X is mistaking arrogance for confidence, and if your friendship is strong enough, you may be able to, subtly, point out that his actions are not always interpreted by others as assertive and confident but may be seen as arrogant or just plain annoying. Since this is a relatively new friendship, I appreciate this may be difficult, especially if you are both male and have been conditioned not to discuss such things. However, many people who lack self-esteem respond disproportionately to praise, so you could try 'Parenting 101' - pile praise and compliments on X, or at least make sure he realises you are 'impressed' when he does things you approve of (but try not to be TOO obvious), and withhold comment if he does something that most people would deem inappropriate. If X is seeking attention and validation, ignoring his unwanted behaviour will gradually make those actions unfulfilling for him and he will stop.
[4a] In conjunction with [1], it may be worth framing your answer as a question: you can respond to the first question with "Why do you ask that question?" If he answers, you are then in the rôle of questioner and have shifted the balance of power in the discussion. This will allow you take back control and regain your confidence. [4b] For the sake of completeness, I have seen a misogynistic office bully silenced by the following technique but I would NOT be inclined to use it with young people (and as a lecturer it would be inappropriate). As his peer, if you are feeling particularly evil, you could build from [4a] and make ever more in-depth enquiries until X reaches the limit of his knowledge, which will usually take 3-4 carefully considered questions. There is a delicate balance to be struck here: don't try to make him look stupid - apart from being morally wrong, that will just make you appear vindictive. This is a very risky strategy. You would probably only have to do it once but it may put an irreparable strain on your friendship. However, if X is as disruptive as you suggest, it may earn the respect of your class-mates and have you considered that X may be the very reason you have few other friends? Upvotes: 2
2015/12/31
1,480
6,135
<issue_start>username_0: My new Primary Investigator, who is well respected in her field, gets really nervous when talking to me for some reason. We are the same gender and both married so it's not a problem of that sort... This does not seem to happen with any other student, who she has a really good relationship with, but when talking to me, sometimes (not all the time) she really fumbles for words. This then makes me start to get nervous and our interactions sometimes become awkward. How should I improve our relationship so I can communicate effectively with my advisor?<issue_comment>username_1: I would advise you to simply ask about it, in a way that pre-supposes as little as possible. For example, you might say something like: > > Can I ask you about something I am feeling concerned about? I've feel like our interactions become very awkward and stumbling sometimes. When I see you talking with other people, however, I do not see this happening. I'm concerned about this, because I want to have a good advisor/advisee relationship and be able to communicate effectively with you. > > > Don't assume anything about her feelings---what you might be interpreting as feeling nervous might be something else entirely, even just a mannerism of your PI. You also may be incorrect that this is unusual with you: people have strong observational biases and also you don't see your PI when she's alone with other people. Hopefully, the two of you can then have a short awkward conversation that will help resolve (or at least allow you to work effectively despite) your other awkwardness. Upvotes: 4 <issue_comment>username_2: It is impossible to hypothesize about an exact reason with such few information but I wish to make a few observations. > > Have you analyzed yourself thoroughly? > > > It may be that you are anxious yourself. It could make other people around you to feel uncomfortable. It may be that you are seeing the reflection of your anxiety on others. It makes a lot of difference for me when a professor greets us with a smile before starting a lecture, rather than being gloomy with eyes of a *dead fish.* Try to wish her everyday with a smile. Assuming you have already analyzed yourself, I agree with @username_1's answer, *but my advise is to converse by email* due to obvious reasons. May be you just need to *break a little ice* between you and her. Invite her to a coffee sometime, may be you could talk about your family, about anything other than your subject. **A personal side note**: May be its just me, but I sometime feel highly uncomfortable with piercing staring of some of my peers. I use this strategy of talking not looking into their eyes. It may be awkward but gets the job done. If you wish you could try talking by not making any eye contact to rule out if that is not the case. This is an entirely subjective observation and take it in that spirit. Upvotes: 3 <issue_comment>username_3: I recommend bringing it up explicitly only as a last resort. PIs don't, as a rule, like to delve into interpersonal psychological issues with advisees. It has the very real possibility of marking you as high-maintenance (focusing on feelings and personal stuff rather than science) and could exacerbate future awkwardness. Before even seriously *considering* bringing it up, I'd make sure of the following: 1. This has been going on for a long time (a couple of months, at least). That is, give the relationship time to settle into an equilibrium. It could be early relationship weirdness and it will just go away. 2. 
You have introspected honestly about the causes, and worked to eliminate any obvious ones. One thing that strikes me about the OP is the lack of any consideration of alternative hypotheses, causes, or remedies. It strikes me as a somewhat casual and unreflective diagnosis, frankly. This may just be the way it was written, I realize, but a lot more nuance is called for. 3. You have gotten confirmation from people you trust (not just friends) that she is indeed acting weird. The internet is good, but people who know both of you, and can give discreet and objective advice, would be even better. Discretion is important here: don't start trash-talking behind her back if she is your advisor. :) 4. It is interfering with your ability to do good science together, and with your mentor-mentee relationship, in a way you find unacceptable. After all, sometimes advisers are just awkward, but they still give great professional and scientific counsel. This is actually very common, if not ideal. If those four conditions are met, then it *might* not be a mistake to talk to her. The PI isn't a friend primarily, but the boss, so it's important to act accordingly. My hunch, though, is that if you really take the time to think through the four conditions I mentioned, you will largely resolve the problem. That's my experience as both an advisee and an adviser for many students over the years. Upvotes: 3 <issue_comment>username_4: Could it be that the PI feels intimidated because you're really smart or knowledgeable? Even if you're "just" a student and she's a PI, she might be suffering from a bit of the "impostor syndrome". Anyway, I suggest approaching this step by step. It might be something you do unconsciously, something that wouldn't bother most people. So let's rule that out. Explain the situation to a couple of close friends (people you can trust to tell you the truth even if it might hurt your feelings) and ask them if they can think of any reason why someone might act flustered around you. It might be something specific to the way the two of you interact, or it might even be your imagination. Perhaps you could talk to another student about this; ask if they've noticed the same thing and if they have any idea why this is happening. Finally, it could be something specific to the PI. Maybe she is attracted to you, or you remind her of someone from her past. So it might be time to follow username_1's suggestion of asking her about it. However, depending on what the problem is, this could make things more uncomfortable. Upvotes: 2
2015/12/31
1,082
4,520
<issue_start>username_0: Is it better for an undergrad to do research during the summer or during the school year? Also, if an undergrad does research during the school year, are they expected to be as productive as someone working during the summer? Do professors calibrate their opinions of students to account for how many hours a week they've said they could commit?<issue_comment>username_1: I think the main question is a false dichotomy. Most students don't have a choice between *either* doing research over the summer *or* doing it during the year. As long as it doesn't interfere with your grades or other activities, more research is always a good thing, no matter when it happens. That said, in my experience you should simply have a conversation with your professor (or direct supervisor, who will often be a grad student or post-doc) about time commitment and expectations. No one expects you to do the same amount of work in 10-15 hrs/week that you might do in 30 hrs/week, and no one expects you to work full-time while taking a full course load. As with many things in life, just make sure to communicate about expectations. Upvotes: 4 [selected_answer]<issue_comment>username_2: "Yes" The somewhat flippant answer is that, if possible and blessed with the choice, you should do *both*. Gaining more experience is, generally speaking, a good thing. Now more specifically, there are some good sides and bad sides to both: * Summer: Lots of time not dedicated to doing other things means, well, lots of time you can spend doing research. For many fields, experiments require large *chunks* of time, and it can be easier to find those in the summer, when you don't have to worry about making it to your next French class or whatnot. Additionally, there are lots of summer-specific research opportunities, ranging from field work experience to REUs, which may mean more funding, and which often mean people are "used" to having undergrad researchers around in the summer months. * School Year: In some disciplines, summer is also very busy. For example, one of my field's major conferences is in the summer, meaning I'll be both gone and distracted before and after. For some fields, it's also prime field-work season, which means if you're in a lab or a theoretician, things might get a bit lonely. Similarly, people do like taking vacations in the summer. It's also often easier to work in "Research for Class Credit" during the school year. > > Also, if an undergrad does research during the school year, are they > expected to be as productive as someone working during the summer? > > > Not if their supervisor is being reasonable. Generally speaking, summers are fairly unencumbered for undergraduates, in a way the school year isn't. If someone is balancing classes, exams, etc., it's fairly unrealistic to expect them to be as productive as they would be in the summer, unless their summer hours are *extremely* limited (due to, say, funding). > > Do professors calibrate their opinions of students to account for how > many hours a week they've said they could commit? > > > Again, assuming they're being reasonable, yes. Upvotes: 3 <issue_comment>username_3: Are you planning on doing research for a professor or independent research? If the former, it's best to do what's most convenient for the professor. I've found that it can be difficult to do supervised research over the summer because everyone really feels like it should be a break. On the other hand, professors usually have a lot of other commitments that they've planned to cram into the summer, and they might end up not giving you as much feedback as you'd like. 
You should really talk to the professor to work it out. I've noticed that professors will admit to being incredibly busy but often won't cut students slack unless something out of the ordinary has happened. But this is because (oftentimes) they've hired a student to help precisely because they're too busy to do it. If they cut you slack, then their work won't get done. If it's independent research - even if you're doing it for course credit - then I only recommend doing it during the school year if you're otherwise taking a relatively easy semester. Also make sure to plan to give yourself a break around midterms and finals. If you want to write a final paper, get it done at least two weeks before finals. I've found that it's really best to do independent research during the summer, but that's only true if you're incredibly self-motivated and self-disciplined (though really you need those traits to do research at all). Upvotes: 0
2016/01/01
388
1,630
<issue_start>username_0: How can I share data accompanying a research paper? I know there exist some places to share datasets, but sometimes I would like to share data that do not form a dataset (e.g., experiment parameters, results, etc.), and add a pointer in the research paper to the online location of the data. The data typically range in size from a few KB to a few MB, and the license is not an issue.<issue_comment>username_1: The typical way to share a moderate-sized set of data accompanying a research paper is to attach it to a journal article as supplementary information. For conference papers, however, there is often no option for adding supplementary information. My recommendation for this case is to simply wait and attach the data to the extended version that is ultimately published as the final archival form in a journal. Upvotes: 2 <issue_comment>username_2: > > e.g. experiment parameters, results, etc. > > > In the past, I have shared this kind of data from a conference paper in any of a few ways: * Create an extended tech report version of the conference paper that includes all that data in appendices. Refer to the tech report in the conference paper. * Put the data on a lab wiki, and include a link in the conference paper. * Put the data in a Github repository, and include a link in the conference paper. I choose between those based on the nature of the data, i.e., which format seems most natural for it. I've never used [Figshare](https://figshare.com/) for this, but that would be another idea (if you wanted a DOI for it, for example). You can put any kind of "research output" there. Upvotes: 3
2016/01/01
472
1,769
<issue_start>username_0: I have to write a review of an edited volume and wonder what the best approach would be to cite the individual papers to which I refer. The review is for a bulletin rather than a journal, and the format is quite open. I want to give enough info that the reader can locate any chapters of interest, but I think that putting the full titles in the main body will take up too much space, whereas adding a reference/endnote for each chapter seems like overkill. There are 12 chapters overall and I plan to refer to most of them. The field is computer science.<issue_comment>username_1: Cite them normally. At least [BibTeX](http://www.bibtex.org) has a feature (the `crossref` field; a sketch appears at the end of this thread) to refer to another entry (the book as a whole, in this case) from the entry for the individual article, giving something like 'A. N. Author, "Random ramblings", pages 111-123 in [reference-of-the-book]' in the bibliography. For techniques/tools for writing papers, you should perhaps ask at [TeX-LaTeX](http://tex.stackexchange.com). Upvotes: 1 <issue_comment>username_2: From [my answer](https://tex.stackexchange.com/questions/2515/how-to-cite-chapter-in-book/2518#2518) to a similar question, [How to cite chapter in book?](https://tex.stackexchange.com/questions/2515/how-to-cite-chapter-in-book), on tex.stackexchange.com: > > I think the best policy is never to talk of chapter numbers in the reflist at all, and move talk of chapter numbers, on the few occasions they are needed, to the citation in the main text. But of course you can't always choose. So, if you must use a particular Bibtex style that uses chapters, then include them in your \*.bib files, and avoid Bibtex styles that allow you to refer to chapter numbers in the reference list whenever you can. > > > Upvotes: 0
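To make username_1's suggestion concrete, here is a minimal BibTeX sketch of the cross-referencing feature (the `crossref` field). The entry keys, names, and publisher below are hypothetical, and classic BibTeX styles expect the parent entry to carry an explicit `booktitle` field for the child to inherit (biblatex derives it from `title` automatically):

```bibtex
% Hypothetical child entry: one chapter of the edited volume.
% Fields it lacks (booktitle, publisher, year) are inherited from
% the entry named in its crossref field.
@incollection{author2016ramblings,
  author   = {A. N. Author},
  title    = {Random Ramblings},
  pages    = {111--123},
  crossref = {itor2016volume}
}

% Hypothetical parent entry: the edited volume as a whole.
% BibTeX requires the parent to appear after every entry that
% cross-references it; booktitle duplicates title so that classic
% styles can resolve the chapter's inherited booktitle.
@book{itor2016volume,
  editor    = {E. D. Itor},
  title     = {Collected Ramblings},
  booktitle = {Collected Ramblings},
  publisher = {Example Press},
  year      = {2016}
}
```

A side effect that suits a review citing most of a volume's chapters: when at least two entries cross-reference the same parent (BibTeX's `min-crossrefs` threshold, 2 by default), the volume itself is added to the reference list and each chapter entry is abbreviated to point at it, giving exactly the compact 'pages 111-123 in [reference-of-the-book]' form described above.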
2016/01/01
705
3,248
<issue_start>username_0: Should mathematics PhD students work through all of the past exams if, say, the department makes the last 20 years of exam questions available to students? Or should students mainly focus on the most recent years' exams instead, given that one perhaps cannot do *all* of the exams because that is hardly humanly possible? Just seeking general advice. Thanks.<issue_comment>username_1: Going through problems on old exams will give you some idea of the kinds of questions that are used in these exams, but the committees that write these exams will change over time. This means that very old exams might not be representative of what you'll be given this year. Even if you focus on more recent exams, the committee might have changed and you could well get an exam that looks very different from last year's exam. In any case, solving problems from previous exams should not be the only way in which you study. You should also take time to review the subject and consolidate your understanding of key concepts, definitions, formulas, and theorems. In my experience of grading these exams, I've seen that students who have difficulty typically fail for one of the following reasons: 1. Not knowing some required definition or theorem. If a question is of the form "Show that A has the Frobnitz property" and you don't know what the Frobnitz property is, you simply won't be able to answer that question. You can avoid this by having read broadly on the subject and by making sure that you have committed important definitions and theorems to memory. 2. Not having good problem-solving skills. Although doing old comprehensive exam problems will certainly help with this, doing problems from many sources is likely to be just as helpful. The key here is to focus on solving hard problems that may involve several non-obvious steps rather than the trivial exercises that fill many textbooks. Keep in mind that with a textbook you've been given a lot of context: if the problem appears in the chapter on contour integration, then you can assume that the problem involves contour integration. This won't be true of the problems on your comprehensive exam. 3. Poorly written solutions. Sloppy logic in the solution to a homework problem might get substantial partial credit in an undergraduate class. In a comprehensive exam this is much less likely to happen. You need to give precise and rigorous solutions that cover all aspects of the problem, and there is no room for arithmetic or algebra errors. The difficulty in preparing for this is that you need someone competent to critique your solutions. If you have a study partner, I would suggest that you pick some problems to solve separately and then grade each other's solutions critically. If you find that you're making seemingly minor mistakes but usually have the basic idea for a correct solution, this is an indication that you need to check your work more carefully. Upvotes: 3 <issue_comment>username_2: Getting all worked up trying to solve *all* the problems will just use up a huge amount of time, much of which might be better used otherwise. Get really familiar with the areas covered. In particular, check on newer results that might be covered. Upvotes: 2
2016/01/02
740
3,265
<issue_start>username_0: Should one always write a cover letter when applying for faculty positions, even when not explicitly asked for in the job post? If it depends on the field and/or type of university, please specify. If it matters, I'm applying to Physics departments in universities which primarily emphasize research, which already require (at least) a CV, research statement, teaching statement, and letters of recommendation.<issue_comment>username_1: The cover letter is one place in your application where you can explain why you're interested in this particular position, including both professional reasons ("I'd really like to work with your library's archive of original manuscripts by <NAME>") and personal reasons ("My in-laws happen to live in your small town and my spouse would like to move back.") Showing that you know something about the location, the institution, and the particular position can be a helpful factor if you're on the borderline for an interview. If there's really nothing special that you want to say, then a generic two-sentence cover letter won't greatly hurt your chances. Upvotes: 2 <issue_comment>username_2: A cover letter can be your key to opening the door to that interview. If you are very interested in that position, you should invest some time in doing your homework. Check the research group's website (each member's page) and recent publications. Try to find something you find interesting, and try to connect it with something you did, like your master's project or something like that. Proactive candidates are always more interesting, so you could try to propose something you would like to try, about something they already did, or something you are curious about... For example, "I found those experiments in the paper XXX very interesting... did you try to reproduce them at different temperatures to observe the effect of... ?". You can use the cover letter to explain why they should hire you specifically, or as a very short sample of what they will get if they hire you :) Upvotes: 0 <issue_comment>username_3: **Short answer: yes, absolutely!** I'm in chemistry, but I have participated in a few physics searches. It's customary to include a letter for many of the reasons given in other answers. We want to know that you've taken the time to consider our department specifically and given some thought to how you'd fit, collaborators, etc. Otherwise, with a generic cover letter, there's a nagging feeling that the candidate is either: 1. Blindly sending out masses of applications. 2. Applying to you, but your department is very low on their priority list. Neither is a good first impression. More important, I think, is that it would look ***very*** strange to not have a cover letter. I have been on roughly a dozen searches, each with at least 80-100 candidates. I cannot recall someone who didn't include at least a short cover letter. It really only needs to be 2-3 paragraphs, with maybe a few sentences on the specific department. I can sympathize that it's yet another hoop to jump through in the application process. But I strongly recommend including one, if only because you don't want to leave an odd impression when we get to your file. Good luck! Upvotes: 2
2016/01/02
2,253
8,813
<issue_start>username_0: The academic year starts somewhere in the middle of the normal calendar year in almost all schools. Even if the courses are divided into semesters, the Fall term is essentially considered to be the beginning of a new academic year. The same is true for the financial year in most places. 1. Why doesn't the academic year start in January? 2. If the reasons are so prevalent, then why didn't the normal calendar year start in the month of August, for example?<issue_comment>username_1: Basically because schooling is the inverse of farming. That is: Historically farming and feeding the family (and community) took precedence over all other considerations. This is primarily a job that takes place through the spring, summer, and fall -- with little or no activity possible in the winter. Therefore in most cultures the original calendar year starts in the winter and so spans one farming cycle, i.e., spring-summer-fall (in the original Roman calendar, the winter period didn't even have assigned months!). When schooling historically started to be of some value, it came secondary to farming, and occurred only as farming allowed. So the winter is an excellent time for it, because no useful farming can be done. On the other hand, the harvest season, around August-September, is the busiest time for farming, and it was critical in traditional communities to have "all hands on deck" for that work, and thus schooling would not occur at that time. And therefore the main school schedule would begin after the fall harvest, and run through to the summer of the next year. In short, the fact that schooling and farming have an inverse schedule is not too surprising, because traditionally the former only occurred in the time not being used for the latter. Having grown up on a farm in the U.S. in the 1970's, I still experienced some tension between my family and school, because I would need to be out of school for about a week in September for a particular job that we were expected to do with our livestock at that time. Upvotes: 8 [selected_answer]<issue_comment>username_2: Apparently I can answer but not comment with such a new account. This is really more of a comment on username_1's great answer. Anyway, in my country, there's a concept of the "summer holidays", when kids otherwise going to school are expected to relax and/or do other non-school-related activities (such as summer camps - note that "winter camps", if any, are much rarer). This is basically June to August (July to August for students, whose classes end later). As far as I understand it, these holidays are in the summer (as opposed to winter) because the weather in summer is much more pleasant, and thus much more appropriate for recreational activities (such as country vacations, or traveling). And, yes, historically (and partly to this day), there's some farming-related stuff too (for the kids to help with at some of the summer camps; this is not much the case today, except in the especially rural areas, but was very popular as recently as several decades ago). Incidentally - something that did not come up in username_1's answer, or indeed, as I write it, anywhere else in the comments - the calendar year does not necessarily start in January; in fact this seems to be a (historically speaking) rather new tradition even where it does. (I do not know where it originates; it might have had to do with Christmas.) In particular, the Hebrew (and therefore Israeli) calendar year starts in September, not much after the academic year. 
The Muslim calendar year doesn't really start at any seasonally defined point at all, except in Iran and Afghanistan, where it starts in March. More historically, the British calendar year started in March prior to 1752 (the financial year still does, though it's been moved to April by the change to the Gregorian calendar). The Russian calendar year started in September during the 16th and 17th centuries (and in March before that). Of course, as mentioned by vonbrand, the old Roman year started in March (from which the names of our autumn months originate). Other traditions have yet other starting dates for the calendar year; I've heard somewhere that for every day of the year there's a tradition somewhere that puts the New Year on that day. That much is probably not true, but there are definitely very many. Not many of them, save for the modern Gregorian one, are in January (the traditional Chinese new year almost is, but it's more often in early February); December seems slightly more common (mainly due to the winter solstice). Upvotes: 4 <issue_comment>username_3: In addition to the existing answers, there would be a strong historical link (at least in the UK and Ireland) to the festival of Michaelmas ([The feast of St. Michael the Archangel](https://en.wikipedia.org/wiki/Michaelmas)) at the end of September. Many universities such as Oxford and Cambridge still use the phrase [Michaelmas term](https://en.wikipedia.org/wiki/Michaelmas_term) to define the term from September to December. It is also the start of the year for the legal system in the [UK and Ireland](https://en.wikipedia.org/wiki/Michaelmas_term#The_legal_year). According to A History of the University of Cambridge: Volume 1, The University to 1546, by <NAME> Leader, p. 29 (which can be found on [Google Books](https://books.google.ie/books?isbn=0521328829)), the Michaelmas academic term has started on 9 October since the Middle Ages. The [Historic UK website](http://www.historic-uk.com/CultureUK/michaelmas/) gives the following explanation as to why this time of year (late September/early October) was also adopted for the academic year. > > There are traditionally four “quarter days” in a year (Lady Day (25th March), Midsummer (24th June), Michaelmas (29th September) and Christmas (25th December)). They are spaced three months apart, on religious festivals, usually close to the solstices or equinoxes. They were the four dates on which servants were hired, rents due or leases begun. It used to be said that harvest had to be completed by Michaelmas, almost like the marking of the end of the productive season and the beginning of the new cycle of farming. It was the time at which new servants were hired or land was exchanged and debts were paid. This is how it came to be for Michaelmas to be the time for electing magistrates and also the beginning of legal and university terms. > > > It would appear that the tradition of September/October is linked to early medieval traditions of demarcating the year, which were based more on religious festivals that also had civil significance, such as preparing accounts, signing leases, etc. Here is a useful document on the [Organisation of the Academic year](http://eacea.ec.europa.eu/education/eurydice/documents/facts_and_figures/academic_calendar_EN.pdf) in EU states for 2014/15. Most countries follow an August-October start to the academic year. Upvotes: 2 <issue_comment>username_4: In Indonesia, the academic calendar used to start in January (1966 to 1979). 
Daoed Joesoef (Minister of Education and Culture of Indonesia at that time) changed the academic calendar to start in July in 1979. One of the main reasons was to follow other countries' academic calendars: plenty of students who were willing to continue their studies overseas had to wait for six months or more. So, the answer to your first question is: to follow the rest of the world. As for the second question, why didn't the "normal" calendar start in August? It's complicated. Britain adopted January as the start of the calendar year in 1752 (CMIIW). The Byzantine Empire used a year starting on 1 September, but they didn't count years since the birth of Christ; instead they counted years since the creation of the world, which they dated to 1 September 5509 B.C.E. (Source: <http://www.webexhibits.org/calendars/year-history.html>) August has been named after the Roman Emperor Augustus since 8 BCE, and I think it's better than Sextilis (the prior name, meaning the sixth month, which sounds rather sex-thing-ish). Upvotes: 1 <issue_comment>username_5: Just to add that there are at least two countries — [Malaysia](http://www.thestar.com.my/news/nation/2016/09/03/ministry-releases-school-term-dates-for-2017/) and [Singapore](https://www.moe.gov.sg/education/school-terms-and-holidays#pri-sec-sch-term-2017) — where the academic year (at least for pre-tertiary institutions) simply follows the calendar year and begins at the start of January. Incidentally, both are near the equator and so have no seasons, so this is perfectly consistent with the farming explanation given above (this is not to say that that explanation is necessarily correct). (Note though that the tertiary institutions in those countries do follow the more standard American/European academic calendars.) Upvotes: 1
2016/01/02
331
1,413
<issue_start>username_0: Six years ago, I published a review (survey) paper about a topic in my field. Since then the literature (and the topic) has evolved substantially, so I think it’d be worthwhile to update the review by writing an “update paper”. I can think of at least a hundred papers published in the meantime that could be worth adding, and a few interesting points (new perspectives, etc.). However, I have never seen a paper revisiting a previously published review paper and updating it. My questions are: * Does such a situation warrant a new publication? * Is it common to publish a review paper updating a previous review? Any examples? * How should I title it?<issue_comment>username_1: Updated review papers are not particularly unusual, [as a Google Scholar search readily demonstrates](https://scholar.google.com/scholar?q=updated%20review%20paper). You should definitely explicitly declare the paper as an updated review, and it is probably best to send the paper to the same journal as before. If you can get in touch with the journal editors, they may even be able to expedite the review process, since they already know and have accepted the first version. Upvotes: 5 [selected_answer]<issue_comment>username_2: This is a stunningly common occurrence in medicine. See the Cochrane Library for a myriad of examples. Many of the reviews in there have pre-planned revision dates as well. Upvotes: 3
2016/01/02
957
3,908
<issue_start>username_0: Due to some personal reasons, I didn't get to complete my semester-end project. So I copied my friend's project and submitted it without major changes (I know I was a fool to do that). Now we have been caught, and my TA told us that both of our scores will be zero. I have begged the TA to at least give my friend his score. I don't know what is going to happen. I am obviously going to fail this class, but I don't want my friend to fail it because of me. What should I do?<issue_comment>username_1: The only thing you could really do is **go talk to the professor** and confess. As far as your friend's grade goes, that is up to the professor. If you stole your friend's work, then he certainly shouldn't be affected. If he was just helping you (which it doesn't sound like), then the professor might also take that into account. You should expect to get a zero, but I have seen a situation where the professor gave the student another chance (e.g., extended the deadline and allowed the student to redo the assignment) and didn't penalize the person who originally did the work. Of course, you shouldn't expect any of these favorable outcomes since you admitted you are in the wrong, but it is still good to always talk things through. Upvotes: 4 <issue_comment>username_2: Whether your friend deserves a grade or not depends on whether he was a willing accomplice or an innocent victim. If he was a willing accomplice, then probably both of you failing the class is the best thing you can hope for (at least at most US universities, where there are consequences besides failing if the professor or TA chooses to report it). If he was a victim (he did not share his project with you with the intent that you could just copy it), then he certainly merits a grade. In any case, the proper thing to do is for you to arrange to meet the professor in person, explain the situation, and apologize. If your friend was a knowing accomplice, both you and your friend should go together. Then it is up to the professor. Upvotes: 6 <issue_comment>username_3: You don't give us enough info about your friend's role in it. If he knew/helped/whatever and didn't stop it, then he will get (and deserves) the zero right along with you. If not, then he may still get the zero, though he doesn't fully deserve it, and that would basically put you on the "worst friends ever" list for doing that behind his back. If the friend didn't know you were doing it, then you need to be not only confessing that you cheated, but also confessing that you stole it from him without his knowledge or consent, and making sure they know he was totally innocent all around. Then and only then is there a shot of him not getting a zero. Upvotes: 4 <issue_comment>username_4: *but I don't want my friend to fail it because of me* Glad to see you repent your action. If you are looking for a safe play, I can assure you there is none if your **TA** is sincere in their words. The ideal thing to do is to > > Submit the case before a competent authority (your professor in this case) and be truthful during the investigation. > > > If the premise of the question is correct, your friend will be granted the grade he deserves. Bear in mind that there will be consequences for you, the severity of which depends on the particular country or university and your professor. 
Upvotes: 2 <issue_comment>username_5: Your friend isn't failing because of you; they are failing because they committed a serious breach of academic ethics in giving you their assignment to copy or even to use as a guide. By acting as your accomplice they are as much a party to your cheating as you are and, as such, they do not deserve any marks. You and your friend can do little but beg for leniency this time. In future, do not put your friends in the position of choosing between acting ethically and helping you out by cheating. Upvotes: 3
2016/01/02
2,592
11,130
<issue_start>username_0: I have been adopting [flipped classroom strategies](https://en.wikipedia.org/wiki/Flipped_classroom) (in upper-level chemistry classes of ~20-30 students), but I often get feedback from students that they want me to "just go back to regular lectures" and that they perceive the flipped classes as more work. I think the results are positive for student learning in my classes, although I haven't done formal assessments. I also personally appreciate the change of pace and style. Previously students might have felt that I wasn't giving enough concrete examples, and this definitely solves that issue, as well as making me more responsive to a particular class's needs. I am wondering about strategies to overcome student apprehension about active learning styles.<issue_comment>username_1: Here are some ideas from <NAME>, author of a publication by the National Research Council's Board on Science Education, *Reaching Students: What Research Says About Effective Instruction in Undergraduate Science and Engineering*. This comes from Chapter 6, "Overcoming Challenges", in the section "Helping Students Embrace New Ways of Learning and Teaching": > > * Make clear from the first day why these teaching strategies are effective, and be explicit about how they benefit students, and what > is expected of students. > * Show students evidence of how research-based strategies will help them learn and prepare for their future life. > * Use a variety of interesting learning activities. > * Encourage word-of-mouth among upper-level students who have already taken the course. > * Listen to students’ concerns and make changes to address legitimate ones. > * Make sure that grading and other policies are fair [e.g., group work]. > > > For details you can get a [free PDF download here](http://www.nap.edu/catalog/18687/reaching-students-what-research-says-about-effective-instruction-in-undergraduate) (note the blue button in the top right). Upvotes: 6 [selected_answer]<issue_comment>username_2: As a student who has experienced a flipped learning method, and especially one which was done horribly, I'll throw my two cents in. The way the flipped classroom was executed at my school was to have the students read an interactive textbook online before classes (this course was Finance 1) and come to class to do some practice problems. The strategy that my professor used, whatever that was, **failed horribly**. There was a [petition at the end](https://www.change.org/p/undergraduate-business-programs-director-and-other-relevant-wlu-staff-bu-383-inspiring-students-to-stay-at-home) of the semester to remove the format, signed by 661 students, and the idea was pretty much scrapped for the second semester. The following points are ones that I feel are important to making this work successfully. 1. Classroom size 2. Lecture material 3. Difficulty of quizzes and tests related to the amount of lecture material 4. Make it worth their money Firstly, it is incredibly important that you are present at all times, especially if the classroom size is big. The students are hoping to come to your lecture to learn from *you* and to learn from your experiences in the subject. Therefore, being there and always lending a hand is very important. Furthermore, if the size of the classroom is too big, then consider getting *many* TAs. The consequence of not always being present or not being helpful is usually that students stop coming to class because they feel it is not worth their time. 
I believe classroom attendance steadily dropped to 15-20 people per class out of 100+ students across all sections. Secondly, if you don't want students to object to this teaching style, then make sure the teaching material provided to students is sufficient to study from. If the course is technical, then a couple of handouts will not be enough. Students need a concept to be explained a number of times in numerous ways using multiple examples. This is how they will study for quizzes, tests and exams. Furthermore, embed your personal experiences and neat tricks/tips into handouts. If you are going to ask students to learn from just Khan Academy videos or something similar, then I can guarantee that students will hate it. A follow-up point to this is the difficulty of the quizzes, tests and exams. If you don't want your students to grumble about the course, then make sure your quizzes, tests and exams are representative of the material that you have handed out. One of the main reasons a petition was signed at our school was the results. Whether the fault was with the teaching style or the students, that is difficult to say. However, most if not all of the students blamed the teaching method. At the end of the day, most students care about their grades, and they will blame the most recent change, which in our case was the change in teaching style. Lastly, make it worth their money (time). It is important that you explain to students, either via hardcore stats or some other method, that this is worth the money they have paid for the course. In our case, most students felt that the professor was being simply lazy by not teaching, and they felt they were losing their money. I hope this was helpful! Good luck! EDIT: @JessicaB suggested that I didn't say explicitly where the professor went wrong, so I would like to elaborate here. 1. The professor *did not* provide adequate learning materials. Finance 1 is heavily quantitative, and therefore most students require lots of examples and in-depth reasoning as to why things work. The strictly online textbook the professor used was not thorough enough to explain some of the problems students were having. Furthermore, the practice problems were not at all indicative of the midterm which was given. To draw an analogy, picture learning only how to solve equations of the form `ax + b = 0` and then being asked on the midterm to solve equations of the form `ax^2 + bx + c = 0`. 2. The professor made no effort to provide students assistance *even* in class (which is the entire point of the flipped classroom concept). The professor usually had one TA, whom he would, literally, point to whenever we had a question. He made no effort to help students and would simply stand in front of the class. This pretty much made most people believe (and no evidence to the contrary was seen) that he was lazy. 3. He did not hold office hours. If a student was not comfortable asking questions during class, then tough luck: he refused to answer questions via email as well. Hopefully that is explicit enough on top of the points already provided before. Upvotes: 5 <issue_comment>username_3: I have taught a flipped graduate engineering class for two semesters. The average number of students is usually about 30. Based on my own informal questionnaire as well as the official end-of-semester student evaluations, I can say that not all but a significant majority of the students liked the flipped class. If you flip your class, three things are important: 1. 
Make sure you explain to your students what a flipped class is and why you flipped your class. 2. Make clear to your students what your expectations are. 3. Make sure that the student workload does not increase compared to a conventional class. That means that all the work they are expected to do prior to coming to a flipped class must be compensated by reduced or no work after a flipped class. What I did was post slides well before each week's lectures and explain to the students that I expected them to have studied the material before coming to class and that I would not go over the material in class. I would spend the first 5 minutes answering focused, but not general, questions. (By general questions, I mean questions that clearly indicated that the student did not study the material.) The rest of the lecture was spent on me working through examples on the blackboard and/or by the students working in groups with me checking up on them. Upvotes: 3 <issue_comment>username_4: > they perceive the flipped classes as more work. It might actually be more work. Rather than passively sit in class and listen to lectures, and then fill out some papers as homework, they need to actively engage in learning material at home, and then actively accomplish a goal in class. Also, skipping a night of preparation doesn't just mean a missed homework assignment and some lost points. It means that they will have an unpleasant experience in the classroom, accomplishing nothing that they desire, and they will get a bad grade. It will feel like a double whammy, so they feel extra obligated to do the homework so that they are prepared. However, the point of the flipped class style wasn't necessarily to reduce work. If they spend more time on the material, and get hands-on experience, they may actually be learning more. And that, after all, is the real goal, isn't it? Upvotes: 3 <issue_comment>username_5: As a student who is currently enrolled in two flipped classrooms, here's my input. Your students may require extra motivation to complete the flipped classroom work; whether that motivation comes from tests or quizzes is up to you. Flipped classrooms are much more difficult in the sciences due to the fact that many examples may be needed and there may be exceptions to some rules (like the exceptions to the octet rule, which I was never taught), but with enough class time to do shortened lectures it should be fine. Some students just passively absorb information while others have to work for it, and from experience, those who passively learn have the hardest time adjusting. Good luck! Upvotes: 2 <issue_comment>username_6: I have taught a course on computability and complexity theory in our undergraduate programme in computer science several times since 1991. In 2013 I decided to stop lecturing and to "flip" the course with podcasts instead of lectures. [I co-authored a paper in 2014 about my experiences with this new strategy.](http://link.springer.com/chapter/10.1007%2F978-3-319-09635-3_25#page-1) Many students appreciated the flipped structure but a few reacted negatively for various reasons. One should never expect *not* to get such reactions from students; some students will probably never be satisfied. The real concern lies elsewhere: that by replacing lectures with podcasts, you spend less time with the students. The importance of the flipped classroom is that you can devote your time together with students to active learning. 
It is tempting to devote a lot of effort to creating polished presentations to replace lectures, but the real challenge is thinking about how to encourage active learning. Overall, it is extremely important that students feel that they do not lose anything in this format – or rather, they should feel that this is a better way of learning. Therefore you will always have to explain why you have structured your teaching activities the way you have and how these activities are meant to help each student achieve the learning goals of the course. Upvotes: 2
2016/01/02
1,547
6,343
<issue_start>username_0: **Disclaimer** Having reread the title, I realize that it might be misleading. I do not mean that being teaching staff or administrative staff is something to be "stuck" with. For starters, I want to be involved in academia as a scientist, and scientists are not much valued in my country. I am at a point in my life where I am deeply concerned about my future. In fact, this is the only period in which I have had this many concerns. I seek any advice that can take me out of my downward spiral. My job conditions ----------------- * I am 26 years old. * I finished my master's last year. * I have had a full-time job at a university since November 2011. * I have also been a PhD student at that university since February 2015. * Even though my job title is "research assistant", I teach for 8-10 hours a week and proctor exams for 6-8 hours a week on average. * Because of my teaching duty, I also prepare quizzes/assignments and grade them, which takes approximately 5-6 hours per week. * Sometimes, I have extra administrative duties that might take up a full day or so. * It has been one year since I finished my master's, and yet I have no publications. * My thesis adviser also suffers from administrative duties and has little to no time to carry out research with me. * There are no research groups where I work. Even though there are some people who want to carry out group studies, they are either too busy or their areas of interest are different from mine. My personal issues ------------------ * Despite the [question I asked before](https://academia.stackexchange.com/questions/23150/who-should-pursue-a-ph-d-degree), I now realize that I really enjoy research. By research, I am not talking about the [fun parts](http://explosm.net/comics/3557/), but about struggling with the *seemingly* most boring and useless parts. * I usually study in one long block to be productive. For instance, if I start studying at 10PM, I finish at 3AM or so. However, my productivity drops when I divide my working hours into multiple periods. * My job does not let me study continuously for long stretches. * I do not want to, and more importantly, am not capable of working in industry. * I feel like I am in a downward spiral: the more time I spend without publications, the harder it becomes to gain a scientific reputation around the world. * Since my life goal is to be a scientist, I need to find a paid PhD position in a country that values scientists in order to be happy with my life and do my part to prepare a better future for my soon-to-be spouse. * If I do not gain enough of a reputation around the world, I probably will not make the shortlists of universities. * I do not have enough experience to carry out research all by myself. Even if I do, it will probably end up as trash. Conclusion ---------- * I have no hard proof that I am able to do research other than some first-attempt technical reports and my supervisor's reference. Hence, I am naturally being turned down by most universities, since I have no proof that I can do research. * I am afraid of being stuck in the country that I live in, and being a person who has no other choice than to work as a full-time college staff member, loaded with administrative duties and with no research practice. * If I finish my PhD at my current university, the above situation will be inevitable for me, since I will probably have no publications, and the same process will follow me through post-doc applications. 
* I have limited funds. Miscellaneous Points -------------------- * I have applied to a PhD position requiring a considerably low fee and have agreed with my potential supervisor in principle. However, that university requires some paperwork to recognize my master's diploma. There is a chance that I will not obtain the recognition and will miss the deadline for enrollment. * Most of the universities with paid PhD positions either require publications or a reference from their own faculty members. Unfortunately, I have neither. My choices that I have thought of so far ---------------------------------------- * Applying for master's positions with scholarships and starting all over again. * Quitting my job, dropping out of my PhD, and asking for internships from various research groups across Europe, hoping to find a job. By doing this, I would be risking all my savings and could end up back where I started with nothing. * Applying for industry jobs, and submitting applications to PhD positions without scholarships after I save some money. * Accepting the fact that I was born unlucky, and carrying on with my life with below-average happiness. More on the country that I live in ---------------------------------- * It does not have a good reputation throughout the world. * In most of the universities, your publications are only "something that you do for yourself". As long as what you do does not attract potential students, the university prefers you to work harder at your administrative duties. * Even if you are capable of inventing teleportation, and are obviously very close to inventing it, the administrators will not appreciate it unless you do your *job*.<issue_comment>username_1: Concentrate on your research. If the Powers That Be consider you valuable enough as a researcher, you *might* get away with few administrative responsibilities. But being a successful researcher means you'll end up managing your graduate students, research group, ... In a sense, it is inevitable. The best you can do is to avoid the most boring tasks... Upvotes: 1 <issue_comment>username_2: I disagree with you that applying for a master's program is like having to "start all over again." In the US, it isn't unusual to have two master's. I know many people who got a master's from one school and then another from their PhD school. Also, Research Assistants have very few obligations other than fulfilling their [small] course requirements. Again, this might be different in other countries but is typical in the US. You may be expected to be a Teaching Assistant prior to becoming a Research Assistant, but it will be a lighter workload than what you have now. Taking this route is what I would expect from anyone who isn't coming from a top-tier university, so go for it! Upvotes: 3 [selected_answer]
2016/01/03
782
3,362
<issue_start>username_0: **Context**: I've applied for several tenure-track positions. I have one or two favorites, but I'd gladly embrace any of them, basically because I only applied to good universities, with good programs, in good cities. **Question**: Will they usually just ignore the 'discarded' candidates, or will they inform us of the decision? In my previous experience, I was informed several months later, after I asked via e-mail. I understand that it would be a significant amount of work (most places have more than 250 applicants per process), but no information at all is kind of rude, isn't it? I also know that the candidates who are selected for an interview are contacted, and some universities give a tentative timeline, so you can infer whether you were selected or not, but that's not the general case...<issue_comment>username_1: My own experience was that whether and when a response came appeared to be exceedingly arbitrary. Some places gave a rapid "no", others gave a "no" after many months (one after more than a year), and some never said anything at all. I have heard of similar experiences from other people. I have not been able to discern a pattern of which organizations are likely to fall into which category, and also know that different people seem to have different experiences with the same organization, presumably based on who is running the committee in a given year. Upvotes: 4 <issue_comment>username_2: The very best practice is to inform applicants as soon as possible that they are no longer viable candidates. Thus, first-round cuts would know quickly that they have not advanced to the second round. Whether or not this was ever common practice is debatable, but it is undoubtedly becoming rarer. Some reasons include: * Increased number of applicants: it is easier to tell twenty people they didn't make the cut, but some positions now receive several hundred applicants, and search chairs are not always provided the tools they need for mass notification. * Bureaucratization: some universities require that no one be notified until the final candidate is found, offered the job, and has accepted the position. This can often be over a year past the initial job posting and is a stupid rule. Others require HR to do all communications, but HR can often not be entirely responsive, or communication can break down. * There is also variance caused by the degree of conscientiousness and the amount of free time that the search chair has. Mid-career faculty often have absolutely no time due to administrative overload. Almost-retired senior faculty have time but may have forgotten what it is like to be on the job market. One crowd-sourced solution is the job wikis that have sprung up. At least you'll know if and when the medium-round candidates have gotten telephone interviews (or the short-list candidates have been invited to campus), and you can presume you're out of the running if you haven't. Upvotes: 4 [selected_answer]<issue_comment>username_3: In France, for mathematics, there is a semi-official [website](http://postes.smai.emath.fr/2016/index.php) where the results of almost all hiring committees appear the day the decision is taken (short lists, then rankings). It is incredibly useful to candidates, and to committees, which need not e-mail each applicant individually to have the information reach them. Upvotes: 1
2016/01/03
4,693
18,966
<issue_start>username_0: I realize why plagiarism is morally wrong and punishable. This is not a question about why it is wrong. This question is about why plagiarism is dealt with so harshly compared to other violations. This might be because I'm uninformed, but [as per this question](https://academia.stackexchange.com/questions/24055/i-plagiarised-what-punishment-can-i-expect) and what I've often read, plagiarism can easily mean being expelled or suspended. Here's a short list of offenses that typically get lighter penalties: * Stealing from another student * Doing illicit drugs * At my university (NYU, where discipline is handled through the housing staff), physically hitting someone would result in a combination of warnings/sanctions/being moved, with the harshest outcome being kicked out of housing. This is definitely an odd system, but I wouldn't be surprised if other universities had a similar imbalance between the punishments for violence vs. plagiarism. I'm sure you could add a lot more to this list. Plagiarism is essentially fraud + stealing, but I've always found it strange that the gut reaction many people have towards it seems to be worse than for getting punched in the face. We definitely don't expel people the first, second, or even third time they get in a fight.<issue_comment>username_1: Colleges, above all, are institutes of higher education, and the standing of each college in academia hinges upon the perceived academic rigor and integrity of said college. Since plagiarism, compared to the other offences, is especially damaging to the quality and image of a college, it is dealt with most harshly. Upvotes: 3 <issue_comment>username_2: An important part of the answer to your question is lurking in the "can" in your statement "plagiarism can easily mean being expelled or suspended." In fact, a first offense in plagiarism is likely to result in warnings, zero marks, and/or failure of a class rather than directly in the expulsion of a student, except for particularly egregious violations (e.g., plagiarizing one's thesis). Note, however, that particularly egregious theft or violence can get a student kicked out for a first offense as well (drugs aren't as good a comparison because many are dubious about considering them a significant offense in the first place). So why is plagiarism considered an offense on the same scale as violence against another student? Like violence, it strikes at the foundation of the entire academic and scientific enterprise. The foundation of academia is the production and dissemination of knowledge. Plagiarism undermines both, particularly since discovering one instance of plagiarism can cast doubt on all of a student's other work as well---is it original, or has the source from which it was stolen merely not yet been found? Likewise, since it is such a foundational and corrosive problem, an institution can be badly damaged by tolerating it or by gaining a reputation for tolerating it. Thus, the zero-tolerance policies and the potentially draconian punishments: one serious case of plagiarism can cast doubt on the entire history of a student's work, and tolerating plagiarism can pose an existential threat to an academic or scientific institution. Upvotes: 6 <issue_comment>username_3: One key difference between plagiarism and violence is that plagiarism is a specifically academic offense, while violence is already handled by the legal system. If a violent incident is sufficiently serious, it can and should be dealt with in court. 
This means university rules only need to deal with cases in which the people involved prefer not to take legal action, and they can leave more serious cases to the legal system. In particular, the university rules are typically geared towards the less serious end, since those are the only cases they expect to handle. (If a student or colleague punched me in the face, I would press charges in court, rather than relying on the university to administer justice. By contrast, if two athletes got worked up and started fighting during a high-stakes game, it's possible that neither one would consider the incident worthy of legal action.) Plagiarism is not always punished severely: a first offense or minor case may be treated leniently. However, the rules allow severe punishments because there are no courts to fall back on. By contrast, universities don't need to have special rules for how to punish a truly dangerous student. Upvotes: 8 [selected_answer]<issue_comment>username_4: First, I think it's debatable whether the premise of your question is correct. *Some instances* of plagiarism are punished more harshly than *some instances* of the other offenses you listed, but I'd have to see some hard data justifying the claim that as a general rule plagiarism is punished more harshly than violent behavior or theft. Until I see such data, I'll remain skeptical. Second, with regard to your statement that "I've always found it strange that the gut reaction many people have towards it seems to be worse than for getting punched in the face," I think there is another questionable premise there, namely that violent bad behavior is by its nature worse and more reprehensible than non-violent bad behavior. At the abstract level I don't think that's true. To take a much more extreme example, <NAME> [perpetrated a non-violent financial scam](https://en.wikipedia.org/wiki/Bernard_Madoff) that is estimated to have cost a total of $18B to thousands of investors, including leading to some people losing their entire retirement and life savings. I'm sure some of those people would absolutely prefer being punched in the face to what happened to them. Madoff is serving a prison sentence of 150 years, which suggests that the courts also think what he did was much worse than most violent crimes. Going back to the subject of plagiarism, the answers by username_2 and username_3 already do a good job of explaining why it is harmful and deserves to be punished (and you yourself said in your question that you understand and agree with that part). Now, some plagiarism cases are much more egregious than others, and certainly expulsion can be an appropriate response in some cases, whereas in other cases a warning and a failing grade on the assignment may be enough. The same is true for fighting, drug- or alcohol-related transgressions, or petty theft: all of these types of offenses can come in very mild varieties that would represent little more than a sign of typical late-teen immaturity and not warrant a severe punishment, but can also escalate to very serious levels where they even warrant criminal prosecution. So the bottom line is, **it all depends on the precise details of the offense**. I strongly doubt it would be correct to generalize and say that plagiarism is either worse than, or not as bad as, other typical types of student misconduct. Upvotes: 5 <issue_comment>username_5: Plagiarism is contrary to academic ideals, goals and aspirations. To put it bluntly, it is also contrary and damaging to the business model. 
In an ideal world, plagiarism should not be present in academia. There are also cultural elements at play: some cultures are more averse than others to underhanded maneuvers, lying and, in consequence, plagiarism. While some cultures only worry about face and being caught in the act, others have better morals ingrained in the society as a whole. In general, I would say the harsher punishments for this offense work as a deterrent and as an example for less scrupulous students. Upvotes: 2 <issue_comment>username_6: One factor that I don't see in the other answers so far is that plagiarism is **usually very hard to detect**: if a student copies an answer from an obscure internet website or book, only a tool such as TurnItIn *might* be able to detect it; if they copy from a student in another section or from a previous year, they may only be caught if the same TA graded both sections; if they paid another student or TA to write an answer for them, it might be entirely undetectable unless someone confesses. A student may also plagiarize in multiple classes, and might get off because each of the teachers who catch them decide to let them off with a warning. If punishment for plagiarism were lenient, students would be likelier to risk cheating, knowing that on the off chance they're caught, they'd only receive a minor punishment. Therefore, the punishment has to be harsh enough that students acting rationally realize that, even with a small chance of getting caught, the resulting punishment is severe enough to deter them from attempting it at all. (See [*Psychology of Academic Cheating*, pg. 144](https://books.google.com/books?id=IhkMvgxJvpgC&pg=PA144&lpg=PA144&dq=low+chance+of+getting+caught+severe+penalty&source=bl&ots=UlP_mcvkid&sig=uxFopIRCyrYtsq3bQ-hw7vTrBKo&hl=en&sa=X&ved=0ahUKEwj_wZW1qI3KAhUJ92MKHZ1dCugQ6AEIKjAD#v=onepage&q=low%20chance%20of%20getting%20caught%20severe%20penalty&f=false) for a possibly clearer description of this problem). My own approach to this is to scare my students at the start of the semester by telling them how seriously I take the slightest attempt at plagiarism, but then evaluating them on a case-by-case basis, and only sending the severest cases to my school's Honor Code Council. Upvotes: 6 <issue_comment>username_4: [*Note: I'm posting this as a second answer since it adds a new (and, I think, important) insight that's completely unrelated to my first answer.*] I think a key point to understand is that **the value system of academia is different from the value system of the rest of society**, and that that makes perfect sense. I don't mean that academia and everyone else disagree on what's moral behavior and what isn't -- I think by and large, in a qualitative sense, they agree on those things -- but they have *quantitative* disagreements on *the extent* to which certain behaviors are moral or not (and therefore how severely they should be punished). Specifically, **in academia, honesty and honest behavior (in a professional context) are much more prized than outside academia**. That is because this type of honesty is essential to the mission of academia, which is to advance human knowledge. So, for a professor to cheat on his wife is seen by other academics as not good in exactly the same way, and to the same extent, as it would be seen by anyone else.
But for a professor to copy a section of a paper written by someone else and publish it in his own paper without attribution would be viewed by other academics *much more severely* than it would be viewed by most people outside academia, because it is not just "ordinary" dishonesty; it is a special kind of dishonesty that discredits and harms the entire profession and its mission. Note that this is analogous to the situation with many other professions that have their own unique value systems and codes of conduct that are different from the rest of society. For example, lawyers care much more about confidentiality of their clients' information than other professionals, and that's why if you are a lawyer who disclosed some information about a client that you were not allowed to disclose, that would be viewed much more severely by your profession (and could lead to harsh punishments such as being disbarred and forbidden from practicing the law) than if you are just a random guy (even a lawyer) who was told a secret by a friend and told it to someone else without permission. The same violation of trust, which according to our normal moral code is just as bad in both cases, is interpreted completely differently according to the context in which it took place, since in the latter context a different value system and moral code would apply. (Similarly, doctors have their own unique codes and find certain behaviors unacceptable in a professional context that most people would not find very problematic. I could come up with some examples to drive home the point but this post is already getting a bit too long.) To summarize, although in my first answer I argued that it's not necessarily the case that plagiarism is punished more severely than any other offenses, here I want to argue that *even if it is punished more severely*, that could be rational and based on the unique value system of academia, which holds certain values, and in particular professional honesty and integrity, as much more cherished and important than the rest of society does. When viewed in this way, I think this situation makes perfect sense and is precisely what you'd expect to happen. Upvotes: 5 <issue_comment>username_7: As colleges and universities are often considered institutions of "higher education", where the mind is meant to flourish and whatnot, it shouldn't come as much of a surprise. As someone has mentioned, a legal system is already in place to deal with things such as physical violence and theft. These crimes are usually passed forward to local law enforcement, assuming the deed was indeed serious enough to fall outside the boundaries of the law. It's simply outside the scope of the institution to focus on these matters. However, that is not to say that they should ignore them; action should definitely be taken if it becomes a problem. Ultimately, as the law is meant to protect those in society from physical violence, these institutions are meant to protect academic integrity. An offence such as breaking and entering frequently results in jail time. Plagiarism could be viewed as the breaking and entering of the academic world, which makes suspension or expulsion a reasonable response. It's like cheating in a video game. You can get to the top very easily by cheating, but once you get there, everyone likely knows you've been cheating. If not, they will soon find out. Your skill level will obviously not match what your previous work has displayed.
Upvotes: 0 <issue_comment>username_8: **It has to be handled extremely harshly because of how massively advantageous to the student it is to plagiarize, and how massively unrepresentative it is of the student's abilities.** If it were not handled extremely harshly it would be more advantageous to plagiarize than not to, even with the possibility of getting caught (they probably won't get caught every time...probably not even much of the time). If it's more advantageous to plagiarize than not, then it will be rampant (because people aren't dumb...they'll do what they need to do to be the best...which in this scenario would mean plagiarizing constantly in order to compete with the others who do, and they will recognize that). If plagiarizing is rampant then it means the university is putting out people that don't actually know the things the university is giving them accredited documents saying they know. If a university is handing out degrees to people that can't do the stuff the degree says they can...the university's prestige is what takes the hit...leading to it getting fewer students and being able to charge less as an institution...leading to all sorts of problems up to and including actual failure of the university. **It is literally a matter of survival for the university itself to punish plagiarism so harshly.** That way the punishment is so harsh that no advantage from the act is worth even the minute possibility of getting caught. If they put out someone that does drugs and punches people...but knows their degree inside and out, then that's a hit on the person, not the university. But if they're putting out sweet angels that are never high on anything but life and take care of elderly people trying to cross streets in their spare time, but don't know their degree, the university suffers greatly for that. So it's not so much that the punishment reflects the crime as the punishment reflecting how much the crime affects the entity handing down the punishment. Upvotes: 3 <issue_comment>username_9: **Plagiarism can be very costly to other students, past, current and future.** Plagiarizing allows a student to get a better grade; this lets them get a "good" job that they then fail at due to not understanding what they claimed to have studied. This leads to the university getting a bad name and the employer not trusting the grading of anyone from that university, and hence not even interviewing students of the given university. **Plagiarism can be of great benefit to the student.** For example, they get a better job due to the better grade. A 1% difference in grade can affect whether someone gets an interview to train with a top company. Having trained with the top company can increase someone's income by $50K per year for the next 30 years! **Plagiarism is hard to detect.** Firstly, it is hard to define plagiarism: what if a computer science student asks a question on Stack Overflow, gets an answer in C#, then rewrites it in Java…? As to checking if a student asked someone else to do the work, we all know it is very hard to detect, and only people unskilled at plagiarism get detected. **Therefore** Given the high benefit, and the low likelihood of detection, the punishment must be great enough for a student not to take the risk. If the risk of being detected is just having to redo the work, then a student has lots of chances of learning how not to be detected with little cost to their learning - hence even in the first year there must be a real cost to getting detected.
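To make the deterrence argument explicit, here is a rough expected-value sketch (the specific numbers are purely illustrative, not data). Let p be the probability that an offense is detected, G the gain from an undetected offense, and C the cost of the punishment when caught. A rational student is deterred only when the expected value of cheating is negative:

```latex
E[\text{cheat}] = (1 - p)\,G - p\,C < 0
\quad\Longleftrightarrow\quad
C > \frac{1 - p}{p}\,G
```

For example, if only one offense in ten is detected (p = 0.1), the penalty must exceed nine times the gain. This is also why a sanction of merely redoing the work cannot deter: it would leave the expected value of cheating positive.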
**Other violations…** Stealing from another student, for example, will not give the same long-term benefit as getting a better grade, and therefore does not need as high a punishment to deter it. Upvotes: 0 <issue_comment>username_10: While I don't think this is **the** reason (*other answers give better primary reasons*), one thing I don't yet see covered by other answers is that plagiarism is almost always intentional, and obviously the blame is on the student who plagiarized, **so it almost always qualifies for the harshest punishment, when it is found.** (*There can be cases where plagiarism is unintentional, but those cases are usually readily apparent - and it is hard to claim "I didn't know that was plagiarizing" by this point in education.*) * Stealing is sometimes hard to prove. It becomes one person's word against another. * Violence is generally a "heat of the moment" result of a situation in which others may also share a large part of the blame. * Drugs are mostly a safety concern - and the more common ones, while they can be harmful to the user, do not directly impact the safety of others in most situations. Of course, where those scenarios do become "serious enough", they can have penalties just as great as plagiarizing. The point is, there are more "arguable" situations in which someone might be accused of stealing, violence, or drugs, but either didn't actually do it or just doesn't warrant the harshest punishment. Upvotes: 1
2016/01/03
1,937
8,903
<issue_start>username_0: This may be an issue of me not understanding the university/college teaching approaches in other countries. It was prompted by this question about student complaints - [How to adopt flipped classroom strategies without student complaints?](https://academia.stackexchange.com/questions/60981/how-to-adopt-flipped-classroom-strategies-without-student-complaints) In Australia and the UK, every mathematics, physics, statistics etc. course I have ever seen or taken has two types of classroom activities - generally referred to as lectures and tutorials. Typically there are 2 hours of lectures per week with an academic presenting the theory and doing some examples. The weekly 1 hour of tutorial has a tutor (usually a graduate student) going through the homework questions and is done after the students have attempted the questions. In addition, students can ask questions during the lectures and the tutorials. I don't understand the advantage of a flipped classroom. Having the students watch a lecture offline would mean they have to ask questions about that lecture without context and they would not have the answer to the question before watching the rest of the lecture. I would expect this to lead to lower understanding. Similarly, having the students work through the homework problems in the presence of the lecturer/tutor would substantially reduce the number and/or length and/or difficulty of the homework questions simply due to available time. A typical homework question set would take several hours to work through and usually have a couple of relatively simple questions that lead up to questions requiring deeper understanding. I would be concerned that running a flipped classroom would mean those deeper understanding questions would never be attempted and/or discussed. Could someone please explain the advantages of flipped classrooms, particularly how the problems I have described are avoided, and whether the advantages only hold in comparison to some specific teaching method that differs from my experience?<issue_comment>username_1: First, I should note that the details of how people implement things that get called a "flipped classroom" seem to vary wildly, so my answers will be based on the version I've taught, and to a lesser extent the ones I've seen my colleagues teach or heard them describe. > > Having the students watch a lecture offline would mean they have to ask questions about that lecture without context and they would not have the answer to the question before watching the rest of the lecture. I would expect this to lead to lower understanding. > > > In my opinion, the inability to ask questions on the spot is indeed a drawback. But there are substantial benefits that mitigate this. Most importantly, many students don't ask questions in lectures when they should. One of the major reasons is that they can't produce questions fast enough to keep up with the lecture; by the time they realize they're confused and formulate a question, the topic has moved on. In that sense, for many students a video is no different from a live lecture in this respect. And with a video, students can watch at their own pace, which can include pausing the video to formulate a question (and possibly even get it answered) before continuing, or rewatching the video after the question has been answered.
Overall, moving lectures out of face time is one of the main benefits of a flipped classroom: most lectures *aren't* all that interactive---few questions get asked, and only by a few students, so the value added by the interaction is low. It's not zero, so the goal of a flipped classroom had better be to replace the lectures with a *more* valuable use of that time. > > Similarly, having the students work through the homework problems in the presence of the lecturer/tutor would substantially reduce the number and/or length and/or difficulty of the homework questions simply due to available time. A typical homework question set would take several hours to work through and usually have a couple of relatively simple questions that lead up to questions requiring deeper understanding. I would be concerned that running a flipped classroom would mean those deeper understanding questions would never be attempted and/or discussed. > > > Every flipped classroom version I've seen *also* has out-of-class homework. They have the simpler questions take place out of class, and have at least some of the deeper ones take place in class. Furthermore, most of them try to make the deeper questions deeper than the ones that would have been asked in the corresponding lecture class, or if not, they at least demand that students engage more with the deep questions. Indeed, this point is one of the main goals of flipped classrooms: with the instructor (and possibly TAs) present, it becomes reasonable to demand more (and different) things of students because there's help available on the spot. In my lecture classes, for instance, I usually find that there's very poor engagement on the deeper homework questions. A non-trivial fraction of students skip them (since they're generally not *that* big a fraction of the homework grade), and a large number of students don't take them seriously---they either scribble nonsense for partial credit, or it's clear that they've worked in groups and essentially copied an answer they don't understand from the one person in the group who got it. In a flipped classroom, there was much less of this: the students sat there and worked through the hard problems, thought about them, and *struggled* with them the way they needed to, because I was there to make sure they did. Upvotes: 5 [selected_answer]<issue_comment>username_2: This answer isn't really distinct from username_1's, but I thought a second view might be of interest. I have been using what might be described as a flipped classroom for one of my modules recently. You might say it's 'not fully flipped', but at least it fits generally with the broader concept of 'active learning'. One reason I did so was to get my students to practice reading the textbook. My class is currently small, but there isn't time to answer every question for every student, so they need to be able to learn from a mathematical text. The problem is that students are not automatically any good at reading these texts, which are rather different to most other forms of literature. So rather than giving the students a video to watch, I gave them a chapter of the textbook to read, with a short quiz intended to test whether they have correctly read the text. My other primary reason was to change which part of the learning process takes place in 'lectures'.
As I see it, understanding new concepts in maths takes several steps: * find out the definition * play with basic examples to understand the definition * use the definition to prove simple things * gain intuition, understand unusual examples, extremes and pathologies * prove more complex things * write out clear solutions In my experience lectures tend to focus heavily on the first and third steps. I felt that the information-transfer of step 1 was not really the best use of lecture time - the content itself can be communicated in other ways (like the textbook), and students are often not ready to ask their questions on first sighting of the definition. Also, I'm reasonably convinced that my students generally need the second step. Because lectures typically build very quickly on definitions that have just been introduced, students will struggle to make sense of much of the lecture at the time. By moving the information-transfer out of lectures, I've gained time I can use to let students attempt practice calculations for themselves, and address the questions they are stuck on. Of course there are also downsides. It is harder to present as much information to the students this way, and I find it more time-consuming to prepare, because I have to think more about what the students will understand at a given point, rather than what I have already written on the board. Also, requiring prep means more work for the students (which is thought to be one big factor in why flipped teaching works well, and why students don't like it), so to compensate I've reduced the quantity of 'problem-sheet questions'. You mention in comments being concerned about introverted students. My personal view is that uniformity is not particularly helpful. Any form of teaching is likely to benefit some students more than others, and different styles also work better for different teachers. That aside though, I'm not sure that this style is intrinsically problematic for introverts. For example, they would have more time to formulate any questions before class rather than during a lecture, so they might feel more comfortable asking for the help they need. Upvotes: 3
2016/01/03
692
2,862
<issue_start>username_0: I am a second-year undergraduate student of software engineering in a foreign country. My family's economic background isn't very strong, so I need a part-time job in a restaurant to balance my personal finances. Recently, I got so involved in my job that I stopped paying much attention to my classes. And since the language barrier in this foreign country is a weak spot for me, the classes are too difficult for me to handle. Now I am about to fail two main courses this semester. Should I just cut back my work hours and take these subjects again next semester, or give up on this place and look for somewhere else, where the communication problems don't exist?<issue_comment>username_1: You should take two needs into account simultaneously: * concentrating on the courses, as they deserve; * continuing to earn some money. ***You might be lucky enough to be recruited for an academic position within the university, as either a TA or an RA, instead of working outside the university.*** Such an approach (as a potential solution) could bring you several benefits: 1. Your work would be in an academic atmosphere. It not only keeps you in the university mindset, but will considerably help you improve your academic language skills, which is exactly what you need right now. 2. Your current workplace might be far from the university. Commuting back and forth wastes both your time and energy (and even your money!). A job within the university lets you spend your energy on all of your plans in a more efficient way. 3. Your salary could even be higher working in the university, if you can demonstrate both your enthusiasm and your capability to handle the tasks. 4. Such academic activities in the university (the financial benefits aside) will strengthen your progress in studying and potential research; for instance, you could find your research direction under the aegis of an RA position, while a TA position would help you consolidate subjects that have so far remained partially unresolved for you. 5. ... It does not sound like a bad idea, does it? Best Upvotes: 2 <issue_comment>username_2: I agree with [Matinking's answer](https://academia.stackexchange.com/a/61017/10220). Regardless of whether you can get an academic-driven job on campus, consider teaming up with another student taking the same courses. Your ideal foil would be fully fluent in the language of instruction, but weaker than you on independent research. The idea would be to work together on understanding the material, and also work as a team for group projects. Interacting with your study partner might help improve your fluency in the language of instruction. Upvotes: 1
2016/01/03
1,276
5,489
<issue_start>username_0: I have been following a professor whose research I really like. To illustrate the impact of his research, let me quote from Quanta Magazine: > > [He] says he usually gets two responses: "You've opened up a whole new theory, and you're an idiot." > > > We have exchanged many emails, and I have refined my proposed research to the point where I believe it suits his research very well. I am really satisfied with this. However, I am 5 points short of this school's minimum TOEFL requirement, so I have to apply to other professors at other schools. Because he has opened up a new theory, other professors' research doesn't have that much in common with my proposal. I have no complaint about that, and I think I would still be happy to work on research that deviates from my proposal. The next professor whose research is similar to my proposal turned me down because he doesn't intend to admit more students this year (but he says he will be glad if I apply next year). **Q**: If I really want to work on this research, can other professors advise me well? As far as I know, I also need to work for them, to enrich their research, so chances are that I will have to work on different research. And if I do have to change my proposal, which is more advantageous: accepting that, or waiting another year to apply? It's kind of a shame to spend a whole year preparing for grad school and then suddenly have to wait another year, but I can deal with that.<issue_comment>username_1: I'd say that you should really focus on getting the prerequisite language score and move on with the research position that you've been pursuing for so long. I'm sure he isn't the only adviser who shares your research interest at the school you're applying to. Upvotes: 2 <issue_comment>username_2: You could find another professor who would be willing to supervise your research even though it is not in his direct interest, so that you can start now but apply next year to the professor you want to work with. You would have to persuade him that you wouldn't get in his way, that you are worth his while, and that you *will* be accepted next year by that professor. In the meantime, regardless of what you do, you can work on raising your TOEFL score, but also continue reading on the topic you want to research and, if possible, start doing it (if it is in a field that uses computers more than field experiments, or if you can perform simulations) so you have a head start. This will also allow you to show the professor that you have already invested in the topic, and hopefully already have some results. Finally, remember that once you complete some research in another field, you can always come back to that professor with more experience, an expanded CV and possibly a couple of publications. Upvotes: 1 <issue_comment>username_3: Don't put all your eggs in one basket. -------------------------------------- Until you've actually been admitted somewhere, it's a really bad idea to pin your hopes on only one advisor, only one school, or only one set of research directions. Sure, there may be one particular advisor that you seem to "click" with more than any others, but you should not think of that as your only option until you actually *know* that it's an option at all.
Even under optimal circumstances, there is a good chance that you won't be admitted to a particular school, or that a particular advisor isn't taking new students, or that your newest grant proposal won't be funded by a particular agency, or that your latest paper won't be accepted by a particular journal, or that your latest experiment will fail, or that you won't be able to hire a particular student into your research group. It doesn't matter how badly you want it, or how much you deserve it; you will never get everything that you want. And you are applying under less than ideal circumstances—your TOEFL is below the minimum requirement for your first-choice department. Unless you can improve your score before they make admissions decisions, it's safe to *assume* that you will not be admitted. Don't *worry* about it; don't be *afraid* of it. Accept it and move on. But don't withdraw your application! **Meanwhile, cultivate other options.** One of those options might be to spend a year improving your language skills. Another might be to apply to other advisors in the same department (although that's unlikely to work given your TOEFL score), or to other departments. Yes, that might mean changing your proposed research, but your research career will span *decades*; there's plenty of time to go back to your initial proposal later. Or you might be able to find other advisors who will work with your original proposal. How do you know which option to pursue? **You don't. Try them all.** I generally advise undergraduates in my department who are interested in graduate school to apply to 5-10 different departments *and simultaneously* to apply for industry jobs. Apply to options where you think you *might* be a good match. Remember that any particular application has a small chance of being accepted, especially at top schools like Berkeley. Play it as a numbers game — your goal is to set up enough options, with a wide spread of probabilities, that you can reasonably expect to get a couple of offers. Once you have those offers, *then* start making a decision about which one to accept. Upvotes: 4 [selected_answer]
2016/01/03
1,442
6,258
<issue_start>username_0: I wonder why most publication venues don't systematically make the LaTeX source of published papers available (which implies systematically asking authors for the LaTeX source). LaTeX sources are more machine-readable than PDFs, and make it easier for humans to reuse parts of a paper (e.g. math equations or figures), amongst other advantages. I fail to see any downside. (I am aware that some authors write their publications using other tools such as Microsoft Word: let's ignore that.)<issue_comment>username_1: Those publication venues that have use for the code typically ask for it (e.g., EPTCS). But if a publication venue does not need the source, why should they ask for it? While in principle LaTeX source code may look more readable than PDFs, there are many limitations to LaTeX source code. Those who ever tried to use a LaTeX2HTML converter know what I mean. As an example, there are documents that will compile with XeLaTeX, but not with LuaLaTeX ... and vice versa! Furthermore, there are documents that will only work with the latest TikZ version. Then, there are documents that do not compile anymore with modern LaTeX distributions. So reusing LaTeX code later may need manual work to make the code work with modern TeX distributions. Applications that make use of TeX code snippets are also hard to build based on author-supplied TeX code. Copying a figure is difficult, as the needed macros may be scattered through the complete document. Also, the figure code may depend on packages that can clash with other packages, which makes pasting them elsewhere difficult as well. Furthermore, searching in TeX code is difficult (which would be another application for which the source code could be used), as heavy use of macros may lead to the search term not being shown in the actual code. Neither of these issues exists with the final PDF. Upvotes: 4 <issue_comment>username_2: I think your question is actually about why publishers don't make the LaTeX source of papers available to readers rather than why publishers don't accept submissions in LaTeX form. You might want to clarify your question. Some publishers prefer to accept PDF versions of papers for review, but then ask for LaTeX source code after the paper has been accepted. Doing the peer review process with a PDF version of the paper saves the publisher the trouble of running the paper through LaTeX and fixing any problems that the authors might inadvertently have introduced into the manuscript (such as using non-standard packages of macros). At the final publication stage, authors typically submit LaTeX source to journals. The journal then applies its own style to the paper, adds copyright notices and page numbers, and produces a final version of the paper using LaTeX. However, journals typically only publish PDF versions of the papers rather than the LaTeX source. Many journals make their style files available to authors and ask them to prepare the manuscript using the journal's style. This helps to avoid problems when the final version of the paper is prepared by the publisher. Having the LaTeX source of a paper makes it slightly easier for plagiarists to cut and paste mathematical formulas and text from the paper or to maliciously produce alternate versions of the paper. Commercial publishers are also generally opposed to any use of a paper that goes beyond simply reading the paper; making LaTeX source available tends to make such reuse easier.
Upvotes: 5 <issue_comment>username_3: I suspect that this is a solution in search of a problem. Most likely journals aren't making LaTeX versions of papers available because they don't see a demand for such a service from the side of the readers; this would take effort to implement and maintain; and some authors would object to the idea of making it easier for others to reuse (read: plagiarize) their papers. In other words, publishers simply have (or at least think they have) better things to do with their time and money. It should be noted that arXiv *does* make LaTeX source code available, and in fact will only accept submissions in the original source code rather than as PDFs (for papers that were written in LaTeX), so in those areas of math and physics where uploading preprint versions of one's paper to arXiv is the norm, this "problem" (such as it is) is already solved. Upvotes: 5 <issue_comment>username_4: A substantial number of all-OA publishers *do* offer machine-readable versions of papers - they just do so as XML rather than as LaTeX. See, for example, the XML links on these papers in various journals: * [Cryosphere (EGU)](http://www.the-cryosphere.net/9/2417/2015/tc-9-2417-2015.html) * [PLOS One (PLOS)](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0145904) * [Journal of Economic Structures (SpringerOpen)](http://www.journalofeconomicstructures.com/content/5/1/2) * [Remote Sensing (MDPI)](http://www.mdpi.com/2072-4292/8/1/32) As the XML is probably more machine-readable than the LaTeX source, why go to the extra effort of providing an intermediate format - especially one with all the reuse/interpretation problems that DCT & others refer to? Upvotes: 3 <issue_comment>username_5: LaTeX files are *less* searchable than PDF and are almost always useless on their own. Besides the packages problem already mentioned here, LaTeX files often use a lot of other files as input, which makes redistribution a hassle. This also makes the file less machine-readable. A search engine such as Google will not understand where a given image, or other input file, will appear in the text and thus will not link the two, hurting the context and search results the file is placed in. This is not the case with PDF files, where everything is grouped together (or at least it is understood where it should be placed; see the HTML view Google makes of PDF files). Now one could argue that you should publish all files required to build the LaTeX file. But even in my extremely limited academic experience I have found cases where this is impossible. Some tables in my thesis were generated by LaTeX using 500MB of raw data. It would be crazy to have to distribute these (or for a machine to need to parse these). Upvotes: 1
2016/01/04
256
1,036
<issue_start>username_0: One of the graduate schools I'm applying to asks if I would like to be considered for admission even if they can't fund me. Would it hurt my chances of receiving funding if I checked "yes?"<issue_comment>username_1: That very much depends on the local policy. In any case, if I had funding for, say, 5 students and could take on 10, I'd fund the 5 best (or the 5 best who asked for funding), and take 5 out of the rest. But that's me... Upvotes: 1 <issue_comment>username_2: It shouldn't, but it might. For example, in my department, if you are the best candidate then **you will get the funding no matter what**. However, if you have **exactly** the same qualifications as another candidate (almost never happens) and we have funding only for one, we would offer the other candidate funding and offer you admission without funding. Of course, once you were at the department, we would try to get funding for you for the next semester/year. All this depends on the department policy, though. Upvotes: 3 [selected_answer]
2016/01/04
5,831
23,796
<issue_start>username_0: Researchers on education must have thought about this, so I hope to find some directions here. Suppose that I am grading students on a scale from 1 to 10, and that I have *no prior knowledge of the ability* of the students but have obtained their test scores (let's say from 1 to 100). Suppose these test scores have only ordinal meaning. A student with a higher score can be said to have achieved better mastery of the course, but having double the score does not mean that one has achieved twice as much as another student. We may also assume that a (statistically speaking) large number of students representative of the student population took the exam. What should be the optimal grading distribution? We may also assume that the grading distribution/scale should achieve two goals: a) it should be informative about students' grasp of the material, b) it should incentivize students to study the material. Regarding a), from an information-theory perspective, we may want to maximize the information (entropy) of the grade distribution. Thus, we would choose a scale which yields a uniform distribution. However, in practice most teachers implement distributions which are peaked. What is the motivation behind this?<issue_comment>username_1: Assigning grades to fit some "optimal distribution" is misguided. We don't *want* to maximize the entropy of the grades in a particular course. It's not a very useful measure of a "good" set of grades. To quote from an [answer by Anonymous Mathematician](https://academia.stackexchange.com/a/15338/11365): > > Strictly speaking, Shannon entropy pays no attention to the distance between scores, just to whether they are exactly equal. I.e., you can have high entropy if every student gets a slightly different score, even if the scores are all very near to each other and thus not useful for distinguishing students. > > > (Note that this was in answer to a question that asked about using the entropy of exam scores as an indication of how good an exam is at distinguishing between different levels of mastery. It wasn't suggesting that grades should be curved *a posteriori* to maximize entropy.) What we *actually* want is for grades to signal as closely as possible the students' [mastery of course material](http://www.covingtoninnovations.com/mc/grading.html). If every student in the course has achieved truly excellent mastery of the course material, they should all get high scores. This grade then *seems* to carry very little information. But in reality, it is much more useful to say that all students in this particular class achieved excellence and deserve a 10/10, than it would be to maximize the "information" carried by the grade and give some students a 1/10 because their performance was slightly less excellent than the highest level of excellence achieved by a student that year. This scenario (where all students achieve excellent or very good grades) is not even that unusual, as [Michael Covington points out](http://www.covingtoninnovations.com/mc/grading.html): > > In advanced courses, it can be quite proper for all students to get A's and B's, because weak students would not take the course in the first place. > > > For an individual student, the grade should depend on that student's demonstrated mastery of course material, and hopefully not at all (or as little as possible) on the other students in the class.
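To see the quoted caveat about Shannon entropy concretely, here is a minimal Python sketch (the score vectors are invented for illustration). Two classes whose scores are all distinct have exactly the same entropy, even though one of them barely separates its students at all:

```python
import numpy as np

def shannon_entropy(scores):
    """Entropy (in bits) of the empirical score distribution.

    Only *which* scores coincide matters; the numeric distance
    between scores never enters the formula.
    """
    _, counts = np.unique(scores, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

tight = [88, 89, 90, 91, 92]    # students barely distinguishable
spread = [20, 45, 60, 80, 95]   # genuinely different performance levels

# Both print log2(5) ≈ 2.32 bits: entropy is blind to the spread.
print(shannon_entropy(tight), shannon_entropy(spread))
```

So maximizing entropy rewards merely making scores distinct, not making them meaningful.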
If you insist on thinking about it from an information-theoretic perspective, what we *really* want is to minimize the [Kullback–Leibler divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) between the distribution of students' achievements and the distribution of students' grades. If you have no information about the exam and what it measures, you cannot assign meaningful grades based on that exam score. If you know about the exam, you *can* assign meaningful grades based on exam scores, but not according to any optimal distribution - you would assign scores based on how much of the exam students are expected to know to demonstrate various levels of mastery, not by mathematically shaping the grades into some predetermined "optimal distribution." > > Edit: Suppose these test scores have only ordinal meaning. A student with a higher score can be said to have achieved better mastery of the course, but having double the score does not mean that one has achieved twice as much as another student. > > > Your edits are not going to change the answer; there's still not going to be an optimal distribution. If I have many excellent students in the class and I do an exceptionally good job teaching them, I'll give out many excellent grades, no matter how they rank with respect to one another. If all of my students are terrible and do poorly, they'll all get low grades even if one manages to get in a few more points than another (although in that case, I'll also take a closer look at the class to see why students are doing so poorly). If half of my class excels and the other half fails to meet minimum standards, I'll give out 50% top grades and 50% failing grades. If their abilities happen to be normally distributed, their grades will be as well. You get the idea. The distribution of student grades should follow the distribution of demonstrated achievements. Any grade distribution that *doesn't* is definitely *not* optimal. --- The idea of grading students to some predetermined distribution (of any shape) is known as "norm-referenced grading." For more information about alternatives to norm-referenced grading, see: * Sadler, D. Royce. "Interpretations of criteria‐based assessment and grading in higher education." Assessment & Evaluation in Higher Education 30.2 (2005): 175-194. * Aviles, Christopher B. "Grading with norm-referenced or criterion-referenced measurements: to curve or not to curve, that is the question." Social Work Education 20.5 (2001): 603-608. * <NAME>. "Norm-referenced grading in the age of Carnegie: why criteria-referenced grading is more consistent with current trends in legal education and how legal writing can lead the way." Journal of the Legal Writing Institute 17 (2011): 123. * Guskey, Thomas R. "Grading policies that work against standards... and how to fix them." NASSP Bulletin 84.620 (2000): 20-29. These go into some more detail about problems with norm-referenced grading. You asked for a grade distribution that is "informative about students' grasp of the material." The literature I have cited explains that pure norm-referenced grading is not informative about that; it can only inform about students' *relative* grasp of the material, compared to others in the same group who have taken the same exam. In other words (emphasis mine): > > It can be useful for selective purposes (e.g.
for the distribution of a scholarship to the 5 best students, or extra tuition to the 5 which are struggling most), but **gives little information about the actual abilities of the candidates.** > > > Source: McAlpine, Mhairi. Principles of assessment. CAA Centre, University of Luton, 2002. In the middle of the 1990s, the Swedish secondary school grading system was changed from a norm-referenced to a criterion-referenced system. Thus it became possible to compare (for the same population) the ability of a norm-referenced grading system and a criterion-referenced grading system to predict academic success. The first paper looking at this Swedish data set is not written in English. (Cliffordson, C. (2004). De målrelaterade gymnasiebetygens prognosförmåga. [The Predictive Validity of Goal-Related Grades from Upper Secondary School]. Pedagogisk Forskning i Sverige, 9(2), 129-140.) However, in a later paper, the author describes those results as follows: > > Cliffordson (2004b) showed in a study of 1st-year achievement in the Master of Science programs in Engineering that the predictive validity of CRIT-GPA was somewhat higher than it was for NORM-GPA. > > > In that later study, Cliffordson found (consistent with the earlier study) > > a somewhat higher prognostic validity for CRIT-GPA > than for NORM-GPA. > > > and that across a variety of disciplines, > > Despite differences in both design and purpose, the predictive efficacy of criterion-referenced grading is at least as good, or indeed is somewhat better, than that of norm-referenced grades. > > > For details, see: Cliffordson, C. "Differential prediction of study success across academic programs in the Swedish context: The validity of grades and tests as selection instruments for higher education." Educational Assessment 13.1 (2008): 56-75. Upvotes: 7 [selected_answer]<issue_comment>username_2: Without trying to evade your question, it is extremely difficult to quantify the impact of different grading distributions. This makes it impossible to find an optimal solution. You state two goals, which makes optimization difficult. Further, your goals are not well defined. On the surface, the goal of motivating students seems laudable, but we need to be careful about what we are motivating. We do not want to motivate students to strive for higher grades (i.e., grade grubbing). Nor do we want to promote a cutthroat environment where students attempt to increase their grade by sabotaging fellow students. We want to motivate students, at both the individual and group levels, to increase their understanding. Education research is pretty clear that external motivators, like grades, do not promote the type of learning teachers strive for. Your second goal is for the grades to be informative. But you do not define whom you are trying to inform. Do we want grades to inform students about their level of understanding, or do we want the grades to inform potential employers? From my understanding of education theory, summative assessments are not part of the learning process while formative assessments are. Further, it is not clear if letter/numerical grades are useful for formative assessments. Upvotes: 4 <issue_comment>username_3: The optimal grading distribution looks exactly like the distribution of capability in the field that the members of your class have. Now, that's not really something you can easily or perfectly measure, but that's your target.
If you could follow your students and grade their performance in later life at a few milestones, you could conceivably estimate the information SNR (signal-to-noise ratio) of your grading. That's where you'd apply those concepts meaningfully. That has nothing to do with the shape of the distribution of *grades*, however. You really can't assume any specific shape as valid or not unless you know the distribution of the population. Upvotes: 4 <issue_comment>username_4: <NAME>, a guru in the quality movement, thought that there should be only three outcomes: "Student masters the material", which is the objective of the course and the main responsibility of the instructor, "Student shows exceptional proficiency" (those he regarded as outliers and would hold out as exceptional achievements), and "Student did not master the material" (outliers at the other end). To students in the first group he would give an A (that would be a large majority), students in the second group would get an A+, and students in the third group would get an F. Upvotes: 3 <issue_comment>username_5: My undergraduate course tutor's research specialism was statistical mechanics. He wrote the "grade normalisation" software for my university course, which was Physics. It would take students' raw grades and spit out the final UK grade (1st, Upper 2nd, Lower 2nd, 3rd, Fail). He told me that the idealised distribution should be a normal distribution: students would be randomised around a mean, with more or less ability, applying more or less effort, and with more or fewer of the distractions of being undergrads (aka over-partying); as those factors are random, the results should come out normally distributed [\*]. My course tutor lamented that the problem was that the actual grade distribution on typical exams was more like an inverted normal distribution: bi-modal, with a group of folks flunking, a group of folks acing the exams, and very few people doing "average". This was largely because exams are not perfect and are hard to pitch correctly; too hard and folks flunk, too easy and too many folks ace them. So the purpose of the software he wrote was to try to renormalise the individual exams against the student population sitting them. Rather than perfect exams grading a student population, the imperfect exams can be normalised against the student population. He was a very smart scientist who had been an educator for decades, so I suspect that his intuition and approach were sound. [\*] Of the two brightest guys that I still know two decades later, one got a triple 1st and a college medal, the other flunked out due to over-partying; the friend who flunked his undergraduate Physics recently took a professorial role at a respectable university with a research specialism of international development. Upvotes: 0 <issue_comment>username_6: Your question is really impossible to answer because you don't give us the measurable goals of your grading distributions. **Some sort of Bell Curve (example #1)** This is clearly flawed because it makes no distinction about mastery of the course, nor does it show the success or flaws of the professor. If you were a really good teacher and had really good students, a "bell curve" approach would mean that students who actually had a really good grasp of the subject material could get an F. Let's just say it is a rather simple class and you are an incredible teacher. The grades on a percentage basis are 88-100%. Do the 88% students get Fs, the 90% students get Ds, and so on? And if so, why would anyone with any brains take a class like this?
I would have to rely on others being dumber to get a better grade? I would think: what if everyone got a 100% except for me and I got a 97%? It makes no sense. (And I know I am being arbitrary using percentages, as a lot of classes aren't graded like this, but it makes for an easy example.) **Some sort of Bell Curve (example #2)** The same situation, but this time the class isn't a "normal" class. It is at a university that has 50 openings into its engineering school, with 1000 kids taking a set number of classes to get in. Then we might want to grade using some sort of bell distribution, because it is comparing the students, not their mastery of the course. After doing this for a few courses you could probably weed the pool down to around 50 students. **Other distribution models** The problem with stating that, given a large population, you should have a certain amount of certain grades is really - as mentioned above - about comparing the students to each other, and has very little to do with their knowledge of the material of the course. Even with a simple curve model where you give the best student a 100% and grade from there, you could run into some major issues and a poor rendering of grades. If you were a terrible teacher and students had poor mastery, the top student might have an 80%. Do you just bump everyone by 20% and feel good about it? Also, when doing things like that, how is it reflected in your grading strategy? For instance, what if the first half of your tests is relatively easy and everyone performs at A level, then the next half is very, very hard? The person getting an 80% gets bumped to 100%. But the person who got the first half all right but then missed everything on the harder material... they still get a C? **What do you do?** If your course is part of a core piece of learning and taken by most of the student population, you want to work with your school on what they expect their students to get. Possibly look at the historical records for similar classes. I am sure an Intro to English class has different scores at upper-tier universities compared to state colleges. From there, set objectives for the students and let them know what is expected. If everyone does what is expected and masters the material, they all get As. If no one shows up for your classes and learns nothing, they all get Fs. The hard part is adjusting the class if you feel that the grading scale is too easy or too hard. But there shouldn't be a fixed distribution. If we are talking English 101 at Harvard, we are seeing at least 70-80% As. For an upper-level class, a lot of the distribution goes out the window. If you are giving a science class in a specific area, you must clearly grade according to how well the student grasped the concepts and materials. Their grade should be indicative of how well they can expect to perform if they go to the next level. If you were teaching Quantum Physics and had a class of 10 that were absolutely below average, would you just give out some As and Bs? Then when they take Quantum Physics II, the next professor is like "what the hell, these students aren't ready for this class." Upvotes: 3 <issue_comment>username_7: My philosophy professor's grading philosophy was that, by very definition, most people are average (i.e.
C) Students asked if he graded on a curve, and his response was: yes - if my grades don't fall on a bell curve, with the bulk of the class making C, several D's and B's, and maybe a couple of A's and F's, then I've either made my tests too easy (grades skewed toward A) or too hard (grades skewed toward F). I suppose it depends on whether you're trying to push a few of your students to succeed, or trying to help as many students as you can to get good grades. Upvotes: -1 <issue_comment>username_8: I agree with other answers: there is little sense in changing your grading paradigm a posteriori once you have obtained some distribution. However, it can be useful to test your assumptions by formulating your expectations before inspecting the actual statistics. What should the overall distribution look like? What distributions should individual problems exhibit? Should a certain subset of the problems be accessible to almost all students? How many of the problems do you expect a strong, average, or weak student to be able to attempt in the allotted time? Do results on some problems correlate more strongly than they should, i.e. do they really test different skills? Then you can check if the data fit your expectations. If they do not, you did something wrong. Here are two examples. ### Multimodal Distributions Experience suggests that you probably get something like a bell curve if there is nothing out of the ordinary going on. In other words, if you do *not* get such a curve, further inquiry may be warranted¹. As a specific example, if you get [![A multimodal histogram](https://i.stack.imgur.com/xNAPu.png)](https://i.stack.imgur.com/xNAPu.png) you want to look for a criterion that separates the participants into two (or more) groups that explain the two bumps: [![An explanation: two groups of students](https://i.stack.imgur.com/gfKbC.png)](https://i.stack.imgur.com/gfKbC.png) Possible criteria (which you can likely check) are course of study, participation in exercises (or some other course-related activity), gender, and so on. You can then take measures to ensure your course works equally well for everybody. ### Hardness of a problem is off What you expect to see is this: [![enter image description here](https://i.stack.imgur.com/D71iQ.png)](https://i.stack.imgur.com/D71iQ.png) That is, most students solve (very) easy problems well (these should form the baseline for passing the exam, if that), average problems distribute normally (these determine everything between passing and B), and few students solve the hard problems (those that do probably get an A). If you see big deviations from your expectations, you may have missed the mark when formulating a problem (which can influence your grading decisions) and/or your course did not promote the necessary skills as you thought it would. --- 1. The other direction does not work, i.e. there may be issues you do not see in a histogram. Upvotes: 3 <issue_comment>username_9: Years ago I was involved in the grading of entrance exams for a well-known university. Our goal, in grading, was to provide the maximum differentiation. The purpose of the exam was to find the top candidates, while recognizing that for some the physics portion of the exam would only carry limited weight, so I was asked to ensure that my grading was sufficiently lenient (this was a VERY hard exam) that the median would be 20/40; this required giving partial credit to partially correct answers, and did in fact produce scores from 2 to 39.
Yes, that one paper scoring 39/40 on a really hard exam is something I remember 30 years later... My point is this: "grades" can mean different things to different target groups. Questions one might ask: 1. Is the student sufficiently prepared to take the "next" course? 2. Does the student merit special recognition (scholarships etc.)? 3. Does the student qualify for some "limited access" position (a job, entrance into a graduate program, etc.)? The first point should be "objective", without regard for distribution. That is, an exam should be constructed to test the necessary knowledge, and whether the person has it or not should be independent of the grades of all other students. Studying the distribution is possibly helpful to the instructor (are you making "good" exams?) - but ultimately asking 10 hard questions, getting on average 4 good answers, and deciding to set the pass grade at 4+ does not guarantee that people who pass the exam have mastered the material. The second point relies less on a distribution, and more on a "cutoff". Perhaps there are 5 scholarships: you award them to the top 5 scoring individuals. If you aggregate over multiple exams, you take their rankings on all exams (perhaps eliminating N outliers) to come up with a final ranking. The final point is the only one where a distribution *might* be helpful. Unfortunately, most grading schemes don't follow anything like the common-sense approach that I describe... Upvotes: 3 <issue_comment>username_10: The right distribution to use is the one that the students are used to. I would dispute your assumption that more entropy is always better than less. Let's imagine a student who typically receives a B grade, with grades going from A through D, plus F, and with various pluses or minuses, and with a bell curve concentrating the class's results around a C+. They are content with a B, they are happy with a B+, and so on. They know broadly where they sit relative to their cohort. Then Professor Entropy decides to give letters all the way through W, with F still being a fail (F is the lowest grade, with W-- being the second lowest) and +/- still being used. The bell curving has been replaced with a flat distribution. The student receives an H+ for their work. This has a lot more entropy, but tells the student almost nothing about how well they did compared to how well they should have done, or how they did in their other subjects (C+, B-, and A- using the old system). If the student wants to compare their mark to other marks they have received, they will have to convert the mark you have given them into the same distribution they are used to. You want to compare the students to each other. They want to compare themselves to a theoretical self that studied a bit harder, or DID make time to go to that party, or stayed with subject ABC instead of XYZ, which they recently switched to. Are you the target audience for the grades, or are the students? Apologies for the lack of citation, but this is as much disputing your premise/assumptions as it is an answer. Upvotes: 1
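To make the distribution checks discussed in these answers concrete, here is a minimal Python sketch. It is my illustration, not taken from any answer: the score scale (0-40, echoing username_9's exam), the group labels, and the simulated numbers are all assumptions. It simulates a bimodal exam outcome and then splits the histogram by a candidate criterion, as username_8 suggests:

```python
# Sketch only: simulate a bimodal exam-score distribution and split it by a
# hypothetical criterion ("attended exercises") to see if the criterion
# explains the two bumps. Requires numpy and matplotlib.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Two hypothetical student groups with different preparation levels.
attended = np.clip(rng.normal(loc=28, scale=4, size=120), 0, 40)
skipped = np.clip(rng.normal(loc=16, scale=4, size=80), 0, 40)
scores = np.concatenate([attended, skipped])

# Overall histogram: a bimodal shape here is the cue to look for a
# separating criterion.
fig, ax = plt.subplots()
ax.hist(scores, bins=20, range=(0, 40), color="lightgray", edgecolor="black")
ax.set_xlabel("Exam score (out of 40)")
ax.set_ylabel("Number of students")
ax.set_title("Overall score distribution")

# Split by the candidate criterion to check whether it explains the bumps.
fig2, ax2 = plt.subplots()
ax2.hist(attended, bins=20, range=(0, 40), alpha=0.6, label="attended exercises")
ax2.hist(skipped, bins=20, range=(0, 40), alpha=0.6, label="skipped exercises")
ax2.legend()
ax2.set_xlabel("Exam score (out of 40)")
ax2.set_ylabel("Number of students")
plt.show()
```

If the two sub-histograms reproduce the two bumps, the criterion is a plausible explanation; if not, try another one (course of study, attendance, and so on).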
2016/01/04
1,122
4,841
<issue_start>username_0: What are some examples of visually appealing papers, preferably with a two-column layout? Basically, this is for a publication that is aimed at a more general public and should be appealing and approachable, but at the same time, I'd like to maintain an academic approach, structure, and feel to it. So I'm looking for examples of papers where the author put some work into polishing the design, perhaps with some conservative use of color and non-standard elements.<issue_comment>username_1: A good place to find examples of highly visually appealing mass-audience articles is in "magazine journals" published by academic/professional societies. Some examples that I happen to be familiar with are: * [IEEE Computer](http://www.computer.org/web/computingnow/computer) * [Communications of the ACM](http://cacm.acm.org/) * [AI Magazine](http://www.aaai.org/Magazine/issues.php) These sorts of publications have technical articles, but they are published for a very broad audience and are professionally edited, including by graphical designers, in order to ensure that they are visually appealing as well. Upvotes: 2 <issue_comment>username_2: The final look of your paper is typically not up to you (the author) but up to the journal (the publisher). I cannot say if this applies to all sciences, but in any kind of research I have been a part of, you: 1. prepare a manuscript in accordance with the author guidelines, which typically make you prepare a Word, TeX, or PDF file that is plain text with enough space between the rows, with figures and tables typically at the end, 2. upon acceptance of the manuscript, a proof is prepared by the editors (typically not the scientific editors but the graphical editors) of the journal, which you (as the authors) get to read and comment on - for instance, if a figure is badly placed, or a table is wrongly set, etc. 3. Once the proof is reviewed by the authors and approved, it gets printed like that. I have never heard of a journal where the authors choose what their papers look like in print. Allowing each and every author to choose layout, font, color, etc. goes against most publishing conventions. That being said, there are certainly things you can do as an author to improve your paper's visual appearance: ### A) Get your figures right! 1. If you are going with a black-and-white publication, make sure no information on your figures is color-coded. If I had a penny for every time I saw a B&W figure where the blues and blacks are indistinguishable... 2. Make sure your figures are at least in 200 dpi resolution (preferably 300 dpi or more). 3. If possible, make sure your figures are vector-based so that they can be scaled up/down without any loss of clarity. 4. Make your figures reasonable sizes; too-large or too-small figures are difficult to place properly when the editor has a strict layout to work with. This issue becomes more prominent for compound figures. Some journals even have limits on how many panels you can have in a figure. ### B) Get your text right! Reading a large body of text is not trivial, especially if the text in question isn't written in the easiest of languages. I have heard the following phrase numerous times over the years: "*An academic text isn't written to be read, it's written to say as much as possible with as little space as possible.*" In other words, it's written to be published.
Since it's difficult to read articles as is, you (as the author) can make the reader's life easier or harder through your choice of words, as well as accuracy in punctuation and grammar. Here are a couple of general tips: 1. Be consistent in your choice of voice; passive voice is supposed to be harder on the reader, but it is more common in academic writing. Nevertheless, if you start with "we did XYZ", stick with that. Constant alternation between active and passive voice raises the question of "*who?*" every once in a while. 2. Try to keep your paragraphs to reasonable sizes. Overly long paragraphs make the reader lose attention; single-sentence, one- or two-line paragraphs are distracting. Try to keep your paragraphs away from those extremes as much as you can. Each paragraph should optimally be a single "move": you talk about a single bullet point. 3. Break up your text with headings/subtitles into reasonable and meaningful chunks (4-6 paragraphs is pretty good IMHO). Make sure these headings describe what you will be covering in the coming subsection. These not only serve to separate long bits of text from each other, but also give the reader a framework in which to place the newly acquired information, making it easier to see the overall point of the paper. Also, it makes relocating a specific bit of information MUCH easier several years down the line. Hope it helps. Upvotes: 3 [selected_answer]
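As a small illustration of the figure advice in the accepted answer, here is how one might export a publication figure as a scalable vector PDF plus a high-resolution raster fallback with matplotlib. This sketch is mine, not the answerer's; the file names, figure size, and data are arbitrary placeholders:

```python
# Sketch only: export a figure following the answer's advice - vector format
# where possible, and at least 300 dpi when a raster format is required.
# The plot avoids color coding, so it survives B&W printing.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(3.5, 2.5))  # roughly single-column width
ax.plot([0, 1, 2, 3], [0, 1, 4, 9], marker="o", color="black")
ax.set_xlabel("x")
ax.set_ylabel("y")
fig.tight_layout()

# Vector output: scales up/down without any loss of clarity.
fig.savefig("figure1.pdf")
# Raster fallback at 300 dpi, per the resolution recommendation above.
fig.savefig("figure1.png", dpi=300)
```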
2016/01/04
1,101
4,737
<issue_start>username_0: I am finishing up my thesis and am currently focusing upon the conclusion. There I have a section on future work. I have quite a few areas that I think require follow-up or investigation. However, is there such a thing as listing too many areas of future work, in that it will look like one hasn't done enough to tackle them? Or that the research is sub-par?
2016/01/04
2,999
12,048
<issue_start>username_0: I have a tenure-track job offer from school S. This school appears to have a two-stage strategy for negotiating a low salary with candidates. Early in the process, HR discusses the salary range, [MIN, MAX]. At a later stage, when the first candidate is chosen, HR will first negotiate with the candidate to a mutually agreed compensation; that's proposal A. After A is reached, HR will say: this is now my proposal; the department still has to approve it! Then comes the department head, who reduces the compensation further to proposal B, likely the MIN value! The department head states that this is the final offer and cannot be negotiated. I am in an excellent position to get several offers from other institutions. But results will appear in March or afterwards (some applications are only now being reviewed). On the other hand, if I accept the offer from S, it's not ethical to withdraw and join another school later on. The strategy of S sent me a negative signal about the school administration. However, I very much like the faculty there, and the school and the program are great fits. I am having second thoughts about whether I wish to join S. I now have to decide: D1. Insist on proposal A (despite the fact that the department claimed that offer B is not negotiable) D2. Reject the offer, do not live with a low salary, and pursue other applications for good (including an earlier delayed offer) D3. Accept offer B, and don't argue over a few k USD a year. Using this experience as an illustrative example, I would like to ask some general questions. The questions might be of interest to all applicants seeking faculty positions when they negotiate salary: Q1. Is two-stage negotiation a well-known business strategy to lower the compensation package? Q2. Is it ethical for HR to agree to a proposal, and later claim that it was countered by a higher authority? Q3. To what extent do HRs have autonomy in negotiations? Q4. Statistically, how often are "final offers" a bluff versus final? Your answer could depend on who says "this is our final offer and non-negotiable" - HR or an upper-level authority. You may consider several cases for HR: HR1. HR declares their autonomy level beforehand. For instance, he or she may say "Before starting, I would like to brief you on our procedure ... ", or "I am in full charge." HR2. HR does not make any statement regarding their decision-making independence. They may do so after a first agreement. HR3. The employer sets bounds (MINs and MAXs). By definition, HR should be able to make independent decisions within those bounds. Otherwise, the HR, the candidate, and the negotiation process are being dismissed and disrespected by the institution (and what's the point of negotiating with HR?).<issue_comment>username_1: Since you don't have other offers in hand, it appears your options are (A) reject the offer with the possibility of having no job or (B) accept the position because you value it more than the salary difference that you are negotiating for. But **don't** put your hopes on some future job offer from another school! Although it is bad to reject an offer that you have already accepted, I have heard of it being done, and I think any reasonable person would do it too if they got a much better offer later on. As for whether this type of negotiating is normal:
I have always been told not to even negotiate salary for faculty positions, since it is competitive enough, most people aren't going to reject the offer over 5-10k unless they have other offers, and some places have standardized salaries. Upvotes: 1 <issue_comment>username_2: To negotiate, you need leverage. This almost always means another offer or a credible threat that you can get another offer. > > I am in an excellent position to get several offers from other institutions. But results will appear in March or afterwards (some applications are only now being reviewed). > > > If the following conditions are met: * you believe that * they believe that that is possible * they believe that you believe that * you are okay with being jobless for a while Then you have room to negotiate. My off-the-cuff guess is that, assuming those conditions are true, you could get something between what you are offered now and what you agreed on with HR. Negotiation is always risky if you are completely bluffing. I do raise my eyebrow: academic markets tend to be pretty competitive. The second condition above I expect to fail. The only mystery to me is why they felt that doing a good cop, bad cop routine and then offering you a low salary was somehow more effective than just offering you a low salary, which you would also have had to accept, because your market rate is still abysmal. Perhaps this is a sign they aren't as sure of their low offer as they project, or perhaps someone read some crappy self-help "negotiate your way to being your own CEO" book and implemented it as policy - I couldn't tell you. > > On the other hand, if I accept the offer from S, it's not ethical to withdraw and join another school later on. > > > It is ethical to stall. It's hard to stall for several months, but you may use the fact that *you* are willing to stall and *they* would like to get on with hiring their recruit as leverage. Something that boils down to asking for a relatively modest pay increase or else you would prefer to wait longer. Upvotes: 3 <issue_comment>username_3: > > After A is reached, HR will say: this is now my proposal; the department still has to approve it! Then comes the department head, who reduces the compensation further to proposal B, likely the MIN value! The department head states that this is the final offer and cannot be negotiated. > > > I would treat this offer from HR as what it is - a non-committal, non-binding declaration, but nothing more. To me, this has the same character as an applicant stating that "he is certainly interested but needs to clarify with [whomever]". Just like the department would be asking for a stronger commitment, you are within your rights to ask for an actual offer before accepting anything. It is **not** stalling to say that you will not be making a final decision until you have a final offer - it is just good sense to do so. I would not consider it unethical to withdraw from the negotiation in the second stage, if HR presents the first offer as fixed and the department head later on does not come through. > > The department head states that this is the final offer and cannot be negotiated. > > > People like to state this a lot. It is usually not true. Also, if you actually have other offers (like you assume you do), *every* offer gives you at least 2 choices - take it, or go for another school. It is not like you have to accept whatever "final" offer they give you.
Upvotes: 3 <issue_comment>username_4: While I do appreciate the concern related to the competitiveness of the academic market, I'm especially concerned about the idea of taking an offer to work at a place where unethical business practices are accepted as the norm and you are just supposed to take it or leave it. While I think only you can choose what's right for you and your family, if you are in the personal position to be able to say "no thanks", even if you might not get another offer - then I'd strongly suggest you not be manipulated into a sort of "just be happy you have a job" situation through strong-arm tactics. You can push back or walk away - or pretend to, as JeffE says - but I would not suggest you just roll over and take it if you don't have to. Why is this a problem? [This Is Called Low-Balling](http://changingminds.org/techniques/general/sequential/low-ball.htm) ------------------------------------------------------------------------------------------------- > > First make what you want the other person to agree to easy to accept > by making it quick, cheap, easy, etc. > > > Maximize their buy-in, in particular by getting both verbal and public > commitment to this. > > > Make it clear that they are agreeing to this of their own free will. > > > Then change the agreement to what you really want. The other person > may complain, but, if the low-ball is done correctly they should agree > to the change. > > > The trick of a successful low-ball is in the balance of making the > initial request attractive enough to gain agreement, whilst not making > the second request so outrageous that the other person refuses. It > nevertheless is surprising how great a difference there can be between > these two requests. > > > The fact that HR specifically told you that you are in "full charge"? Yeah, that's part of the low-ball technique. They are all playing the same game, together. HR is the good cop, the helpful salesman who really wants to get a great deal. > > Guéguen and Pascual (2000) found it to be important that the person > believes that they have made a free and non-coerced agreement to the > first request. In particular adding 'but you are free to accept or to > refuse' to the first request increased compliance. > > > First, they put out a feeler - asking for a range. That's ethical, honest. Then they get you to show interest, negotiate, spend some time - make a commitment. OK so far. Then you come to an agreed-upon decision... that's where honest negotiations end. The "gee, I'd really like to do that, but my [boss/sales manager/department head/dean/provost]"... [that's normal for a used car dealership in Fargo](https://www.youtube.com/watch?v=E5gwc4UizUc). How about they offer you a clear-coat upgrade to go with being tenure track - just 10% off your salary. It's a real steal, we don't normally do this, but you seem like a real smart guy. Please. Let's Talk Payments ------------------- If they want to negotiate, by all means, if you are willing and able to negotiate, have fun. Yes, you do need to be able to say no, and perhaps you need to get past embarrassment and contact that previous offer, or stall, so you have the mental comfort of knowing you can turn this offer down no matter what. "Gee mister, I sure do like your school, but since I originally talked with HR I've had to consider some very compelling other options [like telling you to shove it].
I feel bad about this - truly - but I think my original agreement with HR for X was lower than I should have been willing to accept. It seems like X+10% is really what I should have agreed to. This is within the range we had originally discussed, and I feel bad about bringing this up now, but I wasn't aware of these other options at the time." Maybe you feel awful bad about this - but you suppose, since it would just be too terribly untoward to want more than HR originally OKed, the department being so great and the department head clearly being such a great guy, you'd be willing to accept something more like what you originally agreed to with HR. Or you can be angry and make it clear that you can't negotiate with such a thing on the table - the walk-away. But you have to be actually willing to walk away. They seem to want to play games, and turnabout is fair play. But you have to be willing to turn down the offer and do something else, period, or it won't work out so well. Still, I'd warn you that they are telling you now - far in advance of joining - that they are willing to negotiate all the way to playing unethical games with their offers and candidates, including wasting their time and reneging on phantom offers. If your facts are correct, this is not an accident - it's by design. Personally, of all the jobs I've had, I've only had one employer of any kind - academia, small business, or large enterprise - try to low-ball me like you describe here. And I didn't even try to negotiate that one - I just silently thanked them for letting me know they think it'd be OK to play games with my livelihood before I joined, and I got out of there with all haste, so I can't say how "winning" might work out. Upvotes: 3
2016/01/04
1,043
4,566
<issue_start>username_0: I was graded unfairly in a communication class. I participated more than some students, yet the professor said she couldn't remember me speaking out in class. I asked her if she had any record of that, and she said it would be hard for her to keep records if everyone speaks out. Some of the students who were less active were given better marks than those who were more active; it was inconsistent. There were 23 students in the class and 13 classes in total. By the end of the class, she could not even remember everyone's names and faces. How should I approach this?<issue_comment>username_1: Check if your school has a **grade appeal process**. Many schools in the US have one. It will bring in some other administrators (e.g., the chair of the department or the dean) to review your argument and assess the professor's rationale for giving you the grade. You might have a good chance with this, since she has no written justification for your grade or other students' grades and relies only on her memory. Additionally, if you can remember what you spoke about during the class on specific days (and the students who spoke immediately before/after you), you might want to go ahead and write that down so that you can bring it up with your appeal. Upvotes: 4 <issue_comment>username_2: You are right: a mere statement like *I couldn't remember you speaking out in class* is not a solid remark for a professor to make in this particular case. But your statement *I participated more than some students* has no objective basis either. First, check your institution's policy on exam evaluations. In general, your institution is unlikely to allow a re-evaluation based on a student response that is individualistic and at the same time based on subjective grounds. Your professor's word is probably going to be final. But if the bulk of the students feel the same way - that the marking is unfair - then things are different. If that is the case, the best way is to bring this issue up ***together*** with a competent authority. Upvotes: 2 <issue_comment>username_3: There are several issues to address in this story. **1. Is the grading really based on what the instructor remembers?** Remembering *who contributed during the lectures* is like remembering *who answered the questions right in the midterm*. The latter seems foolish, since there is hard proof that you attended the midterm and wrote down the answers, and there is an answer sheet. However, the former can also be kept as a document (for instance, by putting a mark near the student's name on the attendance sheet), and can be checked later on just like a midterm exam paper. In this case, it is unfortunately your fault for not keeping track of participation points. You should have appealed against the method your instructor follows from the beginning and made sure that every contribution you made was recorded. **2. Your instructor might be giving marks on the worthiness of the comment made** Not every comment during the lecture is a contribution. For instance, in a math class, an instructor says that two plus two equals four. A student who says "Oh, then two plus three equals five!" might not have made any contribution, but a student who asks "Why is two plus two equal to two times two?" might have asked a good question that improves the course flow. **3.
You might think that the other students were inactive, but they were actually more active than you** In line with the second issue, some of the students -- who got more participation marks -- might have asked really interesting questions for the sake of the course subject, but you might have missed them. **4. There are witnesses** If you truly believe that you deserve the participation grade, then you should talk to the other students who think as you do. A bunch of them will probably be by your side when you appeal the grading. **5. What is the university policy?** Remember that an instructor could always tear apart a midterm paper along with the attendance list, and claim that the student never attended the midterm and that the attendance sheet is lost. Is there any rule against this situation? If so, then there should be a rule against "remembering the contributors". What I am trying to point out is that reading a story subjectively would be misleading. Since it is you who received the low grade, you might well be over-subjective about the issue. If you are not a senior student, then you can always retake the course to improve your grade and take a lesson from this incident. Upvotes: 1
2016/01/04
932
3,924
<issue_start>username_0: I wonder whether titles in the reference section should be in title case, given that the publication venue's style files and publication instructions do not specify. For example, which of the following should I use? > > [Collobert2011] <NAME>. 2011. Deep learning for efficient discriminative parsing. In *International Conference on Artificial Intelligence and Statistics*, number EPFL-CONF-192374. > > > or > > [Collobert2011] <NAME>. 2011. Deep Learning for Efficient Discriminative Parsing. In *International Conference on Artificial Intelligence and Statistics*, number EPFL-CONF-192374. > > ><issue_comment>username_1: If the venue doesn't specify, then you should pick a style and stick with it. The venues I publish in tend to specify, so I just use the BibTeX style associated with the venue (e.g., IEEE). Upvotes: 3 <issue_comment>username_2: If they don't specify, then I would look at previous publications from that conference/journal. If that doesn't give you a consistent answer, then just pick one. The styles I have used all take care of this for me. For example, ACM, IEEE, and APA appear to want references to be in sentence casing. Upvotes: 3 <issue_comment>username_3: If they don't specify the format, then I would tend to follow the capitalisation used by the authors of the papers. The rationalisation for this is that I think the capitalisation should be considered an integral part of the title of the paper, and that in the absence of any overriding format specified by the publication venue, you should respect the preferences of the authors of the papers you're citing (in the same way as you should respect the way in which they write their names). However: * This is inevitably a subjective answer (it's not really possible to answer this question objectively). * Doing what I'm advocating takes somewhat more work than imposing a capitalisation scheme of your own choice (because you have to manually hardcode the title capitalisations for all the papers you're citing, rather than just downloading the BibTeX citations and choosing a style). * I'm aware that some people dislike this scheme because the references then look inconsistent, even though that accurately reflects the inconsistencies between different authors. * I'm aware that there are many people (perhaps the majority) who don't view the capitalisation as being integral to the title of the paper. I don't agree with this view, but it's not unreasonable. * As others have pointed out, there are problems with this scheme when a venue imposes an all-caps style on the titles of its publications, because you then don't know what the original authors intended. In that case, I would probably make a subjective choice, e.g. using title case, but there's no real reason for doing that beyond personal preference. Upvotes: 2 <issue_comment>username_4: In the foreign languages, we often deal with conflicting capitalization norms. For example, English does title casing for titles, but Spanish does sentence casing. Portuguese switched about two decades ago from sentence to title. There are three approaches, all of them relatively common: 1. Apply consistently the capitalization style used in the primary language of the paper. 2. Apply the capitalization style on a per-reference basis, according to the norms of the language used. 3. Refer to the titles verbatim, regardless of current capitalization norms in the language(s). None of the choices is really wrong, as there are valid reasons for using each.
That said, the second option is the most common one that I see, with the third being reserved for titles where the capitalization is seen as important or significant for some reason. When in doubt, see what others who use the same style guide or journal tend to do. If you don't see consistency, choose one approach and apply it consistently throughout your paper. Upvotes: 2
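For those who want to normalize reference titles to sentence case by hand (e.g., for venues following the ACM/IEEE/APA conventions mentioned above), here is a minimal Python sketch. It is my illustration, not from any answer; the `PROTECTED` word list is a made-up placeholder you would fill with your bibliography's acronyms and proper nouns:

```python
# Sketch only: lowercase a reference title to sentence case while keeping a
# user-supplied set of acronyms/proper nouns capitalized. Punctuation attached
# to words is not handled - real bibliographies need more care (or a BibTeX
# style that does this for you).

PROTECTED = {"BibTeX", "IEEE", "ACM", "APA"}  # hypothetical; extend as needed

def sentence_case(title: str) -> str:
    words = title.split()
    out = []
    for i, word in enumerate(words):
        if word in PROTECTED:
            out.append(word)               # keep protected words as-is
        elif i == 0:
            out.append(word.capitalize())  # first word keeps its capital
        else:
            out.append(word.lower())
    return " ".join(out)

print(sentence_case("Deep Learning for Efficient Discriminative Parsing"))
# -> Deep learning for efficient discriminative parsing
```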
2016/01/05
1,203
5,460
<issue_start>username_0: It is quite common for me to receive emails from the editors of a journal asking me, before or after the referees have given their approval, to review and improve my English. Then, the editor suggests a professional English editing service for this purpose. My question is: Is this common practice, independently of the quality of the English of the manuscript (as an attempt to sell a service), or is it done only in cases where the English really needs improvement? Here is an example of such an email, where the reviewers had no further comments and recommended acceptance, but the editor asks for an English review: > > The reviewers judge the technical content of your revised manuscript satisfactory. The English, however, is awkward, and needs improvement. Reviewer comments are included below and/or are attached. > > > The language quality must be improved. We advise that you seek assistance from a colleague or have a professional editing service correct the language in your manuscript, which can then be resubmitted to us. > > > AIP and the JAP recommend Edanz for authors who wish to have the language in their manuscript edited by a native-English speaking language editor who is also a scientific expert. Edanz is a global editing service with offices in Japan and China. Use of an editing service is neither a requirement nor a guarantee of acceptance for publication. Please contact Edanz (<http://www.edanzediting.com/aip>) directly to make arrangements for editing and to receive a quotation regarding price and time. > > > Please edit the ENTIRE paper. > > > Please indicate how the manuscript has been revised. Either include a list of changes that addresses each point indicating how the manuscript has been revised as a separate document titled, Response Letter or submit a copy of the manuscript with the exact locations of the revisions titled, Marked Manuscript. That will enable the editors to see whether you have complied with the reviewer comments. > > ><issue_comment>username_1: No, this is not normal. I have never received such a suggestion, despite the fact that neither I nor any of my co-authors were native speakers of English, and there was no reason (such as name or affiliation) for any editor to assume this. I also never heard of anybody else receiving such a suggestion (which does not mean much, however). In particular, I did not receive such a suggestion when publishing with the same publisher (AIP). Moreover, the first paragraph of your example mail does not seem to be an automatically generated or canned text block to me. Such text blocks are usually more diplomatic and would not contain words such as *awkward.* (The rest of the mail seems to be a prepared text block, however.) Upvotes: 5 <issue_comment>username_2: In one journal where I am familiar with the editorial workflow, the review form explicitly asks reviewers to rate the language quality of the manuscript. Based on this rating, the editor can tick an item similar to "Needs language revisions" when putting together the decision letter. The decision letter will then contain a paragraph that advises the authors to do a language revision. I am not sure at the moment whether a particular service is being recommended there. The point is that such a recommendation can get into the letter easily, but will not be included by default.
Probably the editor only wrote the first two or three sentences of the letter, maybe without thinking carefully about the exact formulation, and the rest is based on a customizable template. Nevertheless, it usually means that at least one reviewer criticized the language usage, maybe even without giving specific comments on it. I would advise you to at least double-check the language usage and, if possible, have the manuscript proofread by someone with very good English skills or a native speaker. However, as long as the reviewers can understand the technical content well, these points are usually not decisive for the acceptance of the manuscript. Especially since, as long as any language problems are corrected, I can't imagine that the editor will care whether you use the suggested language service or not. Upvotes: 6 [selected_answer]<issue_comment>username_3: Normal or not (and it's abnormal), this practice is problematic because it's unclear whether the editor has a vested interest in recommending this particular editing service. There are many professional editing services; why just Edanz? If there is any agreement between the journal and Edanz in the form of a commission or kickback, then I would not trust the editor's judgment of my written English, as he/she will be inclined to be more stringent, or even unreasonably stringent. Moving forward, if that "awkward" troubles you, it may be advisable to seek help from a professional editor who is not related to Edanz. While many comments here praise your English, writing a question and writing a manuscript are in two different leagues, and there could be grammatically correct but unconventional expressions in your work, so don't take those praises as proof that your paper does not need to be edited. Meeting with an editor allows you to get a general scope of the problems, if any, and also provides an excellent chance to evaluate your overall English usage. When replying to the editor, you may also indicate that you have sought help from a third-party editor to edit your work. Good luck. Upvotes: 4
2016/01/05
347
1,569
<issue_start>username_0: Generally, it is a requirement to include a CV in the graduate application, where they suggest adding professional experience, journal article information, and research work details. The problem is that I don't have any journal papers, and I haven't held a job. I did a project in my master's, and I have experience doing thesis work. Therefore, all the information fits within one page of the CV. Will it be considered a bad CV because it doesn't have as much info as they expect? NOTE: I didn't add any test scores to the CV because I entered that information on the application.<issue_comment>username_1: CVs come in many formats. Some will list "research experience", where you can list your supervisor and describe the project you worked on. You may also want to list workshops or training you have taken outside of your academic coursework. Awards and other recognitions should also be added. I would suggest that you search online for examples of CVs or ask someone from your cohort for an example of their CV to get more ideas and examples. The idea of the CV is that it is a place to elaborate on achievements and experiences that would otherwise not be noted in your application. Upvotes: 0 <issue_comment>username_2: It definitely isn't unheard of to have a short resume/CV coming out of undergrad. Also, having just one page is perfectly fine and is actually about what I would expect from most people. If you must fill space, I usually recommend adding course projects or personal projects relevant to your field. Upvotes: 3 [selected_answer]
2016/01/05
534
2,297
<issue_start>username_0: I'm thinking of applying for a PhD in Quantum Computing at Oxford. Suppose I later decided that academia wasn't right for me. Would such a title still be useful when looking for jobs in different fields, or would it be a waste of 3 years that would make me 'overqualified'?<issue_comment>username_1: Many types of companies look for Ph.D. holders from highly ranked institutions, even if they won't work in their research area. Think about it: these people have gone through a rigorous merit-based selection process; they have been trained to crack hard problems that require creative solutions; and they are used to going through a process of independent study, deep thinking, and experimenting to get results. It is not uncommon, for example, for financial or consulting firms to tap into that kind of talent. Many R&D companies will also hire people with a variety of advanced degrees. Upvotes: 2 <issue_comment>username_2: Maybe -- but you should certainly understand that a far more common outcome is for a STEM Ph.D. to turn into a quant, a data scientist, or possibly a computer programmer or software engineer. A business analyst or management consultant is also not unheard of (although this will probably take some passion on your part; you're less likely to find a red carpet at the end of grad school without business skills in your own right). The financial sector loves math Ph.D.s, but you might find the work a bit unrelated to your super-specialty (you get to do math at least). It's not a bad way for a career to work out, but if you're more motivated by $$ than by spending several years doing science without much $, there are much more efficient routes. That is, people who land great jobs after getting their STEM Ph.D. could very, very likely have been further along in their careers had they gone straight into industry. YMMV. Upvotes: 2 <issue_comment>username_3: A PhD says you know what 'research' means -- i.e., not the undergraduate definition. That's what companies look for, as opposed to a specific topic, which I doubt many companies will be interested in --- a PhD is supposed to attack problems beyond the current horizon. A PhD from a reputable institution provides further evidence that your research will be of high quality. Upvotes: 1