<issue_start>username_0: I am really paranoid/scrupulous about plagiarism and I would like to ask if what I did below counts as some form of plagiarism. A while ago, I wrote a paper and basically I don't know if the very last sentence in my conclusion sounds too much like the very last sentence of somebody else's article. My paper has gone through the plagiarism software, so it isn't like I copied and pasted their sentence; moreover, I did not just go and change a few words in the sentence either.
Here is what the sentences sounded like:
Their final sentence:
> His anger did not lessen with age any more than did his habit of criticizing other composers. As he continued to argue until his passing in 2002, it was *his* work.
My final sentence:
> What is absolutely certain is that, for <NAME>, his work remained his work and this was something he never allowed his fellow composers to forget.
Is it plagiarism to have a concluding sentence similar to somebody else's *and* also have it positioned in a similar place in the paper? Although I cited this person throughout my paper, I didn't think I needed to cite this final sentence in my conclusion because it was just a general observation that could be gleaned/concluded from the argument in the body of my paper.
I mean if you wrote a paper on the causes of WW1 and discussed throughout how the naval arms race was a contributing factor, you wouldn't have to cite the following statement in your conclusion 'Thus, the naval arms race was a significant factor in the cause of WW1' even though many scholars have already said this, right?
I think I am just idiotically overthinking all this, but I am so unsure; I just get so paranoid about accidentally plagiarising and getting in trouble.<issue_comment>username_1: No, it is not plagiarism.
If, in addition to the last sentence, the rest of your paper is just a re-written version of "their" paper, then maybe there is a cause for concern.
Upvotes: 4 <issue_comment>username_2: If you do a separate, *independent*, analysis of a situation, cite the original analysis, but come to the same conclusion, it isn't plagiarism. You aren't claiming their ideas as your own. You are just reinforcing their work.
If the analysis is independent then this is especially valuable to scholarship as it supports and reinforces those conclusions. This is how we arrive at something like "truth".
Upvotes: 3 <issue_comment>username_3: This isn't plagiarism so long as you properly cite the other paper at the appropriate point in your paper. You stated a fact -- that means you need to cite where you found the fact. This is a paraphrase, so you don't need quotation marks.
If you've developed this idea earlier in the paper, properly citing your sources, and you're just putting this last line in as sort of a synopsis, I don't think you'd need to re-cite here.
Upvotes: 3
---
<issue_start>username_0: Being from Russia, the idea of paying 5-6 digit sums for a university education sounds ludicrous. I understand that prices in the US are different but surely there have to be cheaper options. Besides, if your education operates like a free market, one would expect that at least some universities would choose to offer their educational services at a lower price. Do any? If so, why aren't they popular? Are there options for Americans to get your university education for free?<issue_comment>username_1: As for the title of the question, why it costs so much: A fundamental difference between the US and other countries (or the US several decades ago) is that the rest of the world believes that education is a communal responsibility that should be paid for in large part by tax money. As a consequence, many countries have no tuition costs to students at all -- for example, during my college years in Germany, I never spent a single Deutschmark on paying for college: it was all tax-payer funded.
In contrast, contemporary American society has come to accept that taxes are bad and that many things are an individual responsibility. States no longer pay a substantial share of universities' operating costs, and so the majority of what it costs to provide students with an education necessarily needs to come from students themselves. That adds up to thousands of dollars per year per student. (A back-of-the-envelope calculation will show that that is not much more than it is in countries like Germany, for example; the difference is only who pays for it.)
Given this, it makes sense that you may be able to find opportunities to get a college education for more or less money, but you will not find much if you're looking for "essentially free" education. It simply costs money to educate students, and someone has to pay for it, whether that's at Harvard or Podunk Community College. The only approach you can take is to be good enough to get a scholarship: in that case, your education of course still costs money, but it's not you but some scholarship fund that will pay for it.
Upvotes: 4 <issue_comment>username_2: There are many reasons. The basic one is that higher education isn't fully funded by the government through taxes. Some universities get substantial funding from individual states (funded by income taxes), but not enough to cover all costs. In those universities, tuition isn't completely covered, but it is lower for residents of that state (taxpayers) than for others.
Many (actually most) universities are private and receive only limited governmental funding, so much of the cost is borne by the students through tuition. Many universities (private and state) give out some scholarship aid to a minority of students. This can reduce the cost considerably, even to zero. Some states give (or gave, in the past) scholarships to residents even for study at private universities.
Free higher education comes only for a few, primarily some athletes and some financially disadvantaged students.
Most universities have many revenue streams, but a lot of that is to support research, not teaching, specifically. But even with the high cost of tuition, the student pays only a fraction of the total cost in all but a few "for profit" colleges and universities that tend to have special missions and, possibly, lower standards.
Funding in the US for education generally is very poor and it comes from some crazy sources. Property taxes, in particular, ensure that if you are poor and live in a poor area then your primary and secondary education will be poorly funded since most of it is local funding. We don't seem to be able to break that habit. That isn't quite the same as for higher education, but, some states fund higher education poorly.
---
For completeness, I'll note that all universities cost a lot - everywhere. The difference, really, is who bears the cost. In the US, students bear a lot of it. If faculty are poorly paid and exploited somewhere then they do. A sensible (IMO) system recognizes that an educated citizenry is a benefit now and for the future and finds a way to spread that cost accordingly and appropriately.
Upvotes: 3 <issue_comment>username_3: Unlike in Russia, there is a clearer perception that a more prestigious university means a higher station in life. This does not always work out as expected (see the current outrage about student loans). The education model is fundamentally different for reasons having little to do with academia and a lot to do with the economy.
In Russia, taking a bank loan (mortgages and unforeseen urgencies aside) is typically a pretty bad idea. In the US, the credit score is important, and borrowing money one way or another is viewed as one of the pillars of capitalism: $100 now is better than $100 a few years later. The idea of taking a loan from the government to invest in yourself and pay it back many years later makes more sense in that financial system. Actively managing their finances is just not something most Russians do (certainly not to the extent Americans seem to). And yes, there are scholarships, but they are pretty rare (on the flip side, purely academic careers able to sustain a small family are also exceedingly rare in Russia).
Both have their pros and cons. One of the notable deficiencies of the current Russian system is the perverse incentive to get a coveted diploma no matter the actual education provided, sometimes viewed as 5-6 years of largely wasted time. Soviet-era tekhnikums are now proudly calling themselves "institutes" or even "universities" because that is what moves the most tax money through them. Again, the reasons for that are economic - the alignment between the education system and the industry is poor, unlike in some European countries. The same is true of the US, but in a different way, and you have people complaining about massive debt they are never able to pay off.
And yes, there are cheaper options, but you graduate from Harvard and are easily on your road to a six-figure annual salary; you graduate from Podunk and are barely making ends meet. So the "free market" dictates how much one could fleece each social class and markets to them accordingly. There is little incentive for universities to facilitate social mobility - this is something a government should be handling, and it is rather hands-off in the case of the US.
Upvotes: 3 <issue_comment>username_4: There are many factors.
First, we should address the difference between price and cost. Price is what the student pays. Cost is what the university spends. In most of Europe, the cost is much higher than the price. In the US, the price is generally closer to (but still below) the cost. This is because, in Europe, the government foots a much larger percentage (and - in some countries - all) of the cost.
There is still a difference in cost - the US spends between 1.5 and 2 times as much per student as Germany. Why the difference?
One difference is that salaries, in particular for knowledge workers, are higher in the US. Google is now paying its US software engineers with a few years of experience close to $200K. As I understand it, Google does not pay its software engineers in Germany nearly that much. The university salary is not going to be as high as Google's, but professors who could become software engineers (and most of them could learn to be, even if they aren't computer science professors) are, on average, only going to accept so much of a difference in salary to be a professor.
(For that matter, graduate students at the University of California (all campuses) are currently on strike, demanding to be paid at least $45K a year, because rent in California is expensive and it costs that much to live there! This is going to basically ruin that university system, because there is no way this doesn't completely break the budget, but they're also not going to be able to attract graduate students with what they're paying now since, with the recent inflation, that will require being homeless.)
A second difference is that the average US student is more expensive to educate. In Germany, roughly only the top 50% of students go to university, and only 50% of those who go graduate. In contrast, in the US, students with all levels of high school achievement go on to a tertiary education system that attempts to teach them university-level material, and the system aims to graduate all of its students (though it doesn't come close). Part of the difference comes from a difference in standards - a US university graduate is not expected to know as much or be as capable as a German university graduate. On the other hand, the average US high school graduate knows much less and is less capable than the average German high school graduate, and the difference is even bigger when you consider the university-going cohort. This means US universities are teaching more, to students who are slower learners, than German universities. It's not surprising US universities spend more.
Finally, the US has a fair number of elite universities that spend a lot of money on research and on various luxuries for the students (and staff). (When I visited a Canadian university, I was shocked by how crowded the library was compared to even relatively frugal non-elite US universities. More square feet in the library cost money! Or go to a French university and realize that roofs that don't leak are a luxury.) They are a small percentage of universities, but they do spend so much that they do drive up the average by a noticeable amount.
Upvotes: 5 <issue_comment>username_5: Examining cost drivers in an industry is a complex economic task and making comparisons between countries is also quite complex, since there are many surrounding differences in other aspects of the economy. There have been a number of studies examining the higher education sector with general equilibrium modelling (see e.g., [Chatterjee and Ionescu 2012](https://onlinelibrary.wiley.com/doi/abs/10.3982/QE100),
[Jones and Yang 2016](https://www.journals.uchicago.edu/doi/abs/10.1086/684856?af=R&mobileUi=0&),
[Abbott *et al* 2019](https://econpapers.repec.org/article/ucpjpolec/doi_3a10.1086_2f702241.htm)
and [Athreya and Eberly 2021](https://ideas.repec.org/a/aea/aejmac/v13y2021i2p168-213.html)). There have also been several reports summarising cost drivers in higher education in the US (and elsewhere), with particular attention to the rising (real) costs of university since the 1980s. Reports on this topic include [<NAME> (2011)](https://rads.stackoverflow.com/amzn/click/com/0190214104), [<NAME> Feldman (2016)](https://scholarworks.wm.edu/cgi/viewcontent.cgi?article=1875&context=aspubs), [<NAME> (2018)](https://www.mhec.org/sites/default/files/resources/mhec_affordability_series7_20180730_2.pdf) and [<NAME> (2021)](https://aaronhedlund.github.io/research/hetero.pdf).
The US higher education sector is a mixed economy composed of public and private universities. There are extensive public subsidies throughout the sector, including an extensive system of subsidised loans. The sector operates in parallel with a number of large technological and industrial sectors in the US that have seen rapid growth in the last few decades. I am not sufficiently familiar with international differences between the US and Russian higher education sectors to offer a point-by-point comparison, but here are a few of the main cost drivers for US higher education that are identified in the literature:
* **Cost disease:** One of the drivers of increasing cost is [cost disease](https://en.wikipedia.org/wiki/Baumol%27s_cost_disease) (also known as the Baumol effect). Due to labour competition between different sectors of the economy, the labour cost for higher education workers has tended to increase faster than increases in labour productivity in the sector. For example, the rise of competing technology fields in STEM (e.g., "big tech") can drive competition for highly educated technical workers, which would drive cost disease within the higher education sector. Since labour costs for higher education workers are high, *ceteris paribus* the resulting supply curve is pushed towards higher prices. (A toy numeric sketch of this mechanism follows this list.)
* **Less direct government funding (than other countries):** In the US system there is direct funding of public universities by state governments, with a mixture of public and private universities. However, governments fund a smaller proportion of the sector than in many other countries. In some countries the cost of higher education is borne primarily or solely through taxation and so the service is "free" to the consumer. Since direct government subsidies defray the "sticker price" of a service to consumers, *ceteris paribus* having less direct funding from government means a higher cost for the consumer.
* **Subsidised/govt-backed loans for education:** Most students attend university using student loans. These include loans from the Federal Government and private loans. The US Government provides a range of [direct loans](https://en.wikipedia.org/wiki/Student_loans_in_the_United_States#Federal_loans) up to a limit, and beyond this students can seek [private loans](https://en.wikipedia.org/wiki/Student_loans_in_the_United_States#Private_loans). As is typically the case in economics, when the government subsidises or provides loans for a costly activity, *ceteris paribus* the demand for that activity increases and so does the price. The "Bennett Hypothesis" (named after Secretary of Education William Bennett) is that increases in financial aid and government funded/subsidised loans caused the universities to raise their tuition and simultaneously increase capital spending on amenities to attract students, knowing that the loans would increase demand in the sector and defray costs onto the government.
* **Amenity competition and high capital spending:** Because student loans are provided by the government this substantially increases demand for higher education and induces high levels of competition for students. US universities are known to have responded to these changes by engaging in high capital spending to provide luxurious amenities (e.g., stadiums, sports facilities, pools, climbing walls, student lounges, cafes, etc.) to students to augment their education. This has generally been considered to be a simple way to compete for students when faced with high inelasticity of education quality. A number of critiques of the high cost of the higher education sector have noted the "gold plating" of US university campuses due to amenity competition. This is largely a secondary effect of having a market with a high level of government loan-subsidisation to attract students.
* **Increase in administrative roles and associated labour costs:** Higher education in the US has experienced a large increase in non-academic administrative staff since the 1980s. Some commentators regard this as a development reflecting an increasing "standard of care" in universities while others see it as "administrative bloat" that does not increase the standard of care or even a broader aspect of an ideological capture of the universities that decreases the standard of care. Regardless of how it is characterised, there has been a huge increase in labour costs in US universities due to proportionate increases in non-academic administrative workers who work in support roles in the university.
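The cost-disease point in the first bullet is easiest to see with numbers. Here is a toy sketch in Python; every figure in it (2% annual productivity growth, 20 students per teacher, a 30-year horizon) is invented purely for illustration and is not taken from the studies cited above:

```python
# Toy illustration of Baumol's cost disease (all numbers are made up).
# Productivity in the "progressive" sector (e.g. manufacturing) grows,
# teaching productivity stays flat, but wages in both sectors must track
# the economy-wide wage or workers leave for the other sector.
years = 30
productivity_growth = 0.02   # assumed annual growth in the progressive sector

wage = 1.0                   # economy-wide wage index, anchored to productivity
students_per_teacher = 20    # teaching "productivity": flat by assumption

for year in range(years + 1):
    # The unit cost of a student-year is wage / (flat) productivity,
    # so it rises one-for-one with the economy-wide wage.
    cost_per_student_year = wage / students_per_teacher
    if year % 10 == 0:
        print(f"year {year:2d}: wage index {wage:.2f}, "
              f"cost per student-year {cost_per_student_year:.3f}")
    wage *= 1 + productivity_growth
```

Even though nothing about teaching has changed, the real cost of a student-year rises by roughly 80% over the 30 years, simply because teachers' wages must keep pace with the rest of the economy.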
Upvotes: 6 <issue_comment>username_6: Among many other reasons:
In the US, employers pay for employee health care. In many other countries, this is not the case. In the US, the health care system is exceptionally inefficient and prices are rising fast. Universities are hit particularly hard by health care costs because they employ many older staff. Those costs are passed on to students.
Health care costs are particularly significant to students because, while universities can cut staff pay and defer capital expenses, it is quite hard to cut health care costs.
Upvotes: 2 <issue_comment>username_7: Many good answers here, so I'll just address one aspect that they haven't:
For the top-tier Ivy Leagues, being expensive is the point. They are, practically speaking, giant hedge-funds with a university attached to them.
Harvard's endowment is [$50 Billion](https://www.harvardmagazine.com/2021/10/harvard-endowment-surges-11-3-billion-university-surplus). Princeton's is [$40 Billion](https://3quarksdaily.com/3quarksdaily/2022/10/malcolm-gladwell-princeton-university-is-the-worlds-first-perpetual-motion-machine.html). That's more than the GDP of many small/poor countries. They don't need to charge fees:
> After a stellar year in 2021, Princeton University has an endowment of $37.7 billion. Over the past 20 years, the average annual return for the endowment has been 11.2 percent. Let us give Princeton the benefit of the doubt and assume that at least some of that was luck and maybe unsustainable, and that a more reasonable prediction going forward would be that Princeton can average a return on its investments of an even 10 percent a year. That puts Princeton's endowment return next year at roughly $3.77 billion.
>
> Now—what is Princeton's annual operating budget? $1.86 billion. The arithmetic here is not hard. $3.77 billion in investment income minus $1.86 billion in operating expenses leaves you with $1.91 billion.
>
> Princeton could let in every student for free.
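As a quick sanity check, the arithmetic in the quote can be reproduced directly. A minimal sketch, using only the figures given in the quote:

```python
# Reproduce the quoted back-of-the-envelope calculation (figures from the quote).
endowment = 37.7e9         # Princeton's endowment, USD
assumed_return = 0.10      # the quote's "even 10 percent a year"
operating_budget = 1.86e9  # annual operating budget, USD

investment_income = endowment * assumed_return   # ~= $3.77 billion
surplus = investment_income - operating_budget   # ~= $1.91 billion
print(f"income: ${investment_income / 1e9:.2f}B, surplus: ${surplus / 1e9:.2f}B")
```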
The fact that Princeton charges [$80K/y](https://admission.princeton.edu/cost-aid/fees-payment-options) for undergrad is not because they need the money, but because it signals prestige and exclusivity and there are more than enough takers. In fact, it's peanuts, since wealthy people make multi-million dollar "donations" to ensure their children get in. This applies to all the Ivies.
(P.S. that rate of return is really high, but Ivy League Endowment rates of return have always been significantly higher than normal. There's no hard proof, but it's widely suspected that in addition to the donations, big-shots in the business community pay with knowledge that is used for insider trading by endowments.)
Upvotes: 4 <issue_comment>username_8: I'm rather surprised I didn't see any other answers addressing the premise that
> education operates like a free market
**because education is not a free market**.
In a free market, by definition, there are low barriers to entry. This is not the case in the education market. There are a large [number of very high hurdles](https://www.linkedin.com/pulse/how-start-new-university-united-states-my-experiences-babu-george/) a new educational institution must clear in the United States to even gain access to the market, including:
* Obtaining quality faculty
* Putting together a curriculum
* Physical buildings, not just for teaching, but also for dorms and extracurriculars. While these are not strictly necessary, a lack of physical presence has a large impact on the perception of quality.
* Permission from the state's department of education (This alone will take 2+ years)
* [Obtaining accreditation](https://www.chea.org/about-accreditation) (Regionally or nationally, by a private nongovernmental organization like ABET)
* Certification from USCIS (to take foreign students)
* Specific program accreditation for things like law or medicine
Snuck into these hurdles is a rather nasty catch-22 where accreditation is usually denied to institutions that don't already have students. However, students are obviously wary of institutions lacking accreditation.
Upvotes: 3 <issue_comment>username_9: There are several factors, but in my opinion the biggest one is federally subsidized student loans which break the "expected"/"free market" behavior.
When my parents went to college in the US, it was seen as a privilege, for the highly intelligent, and not something everyone did (perhaps closer to how PhDs are viewed today). Many people could simply graduate from high school and make a fine living. College was also much more affordable, many people were able to pay for a large portion of their tuition by saving wages from working over the summers in high school.
As demand for more skilled labor grew, however, a college degree became increasingly required for many jobs and more people began to attend college. Eventually, college became seen as something "everyone" should attend. In response to this sentiment, the Federal government created a liberal student loan program to help enable anyone who wanted to attend college to do so. Note that unlike most loans, student loans are not based strongly on credit-worthiness or the likelihood that you will be able to pay them back, nor can they be discharged in bankruptcy.
While a potentially laudable goal, some argue that this loan system had the opposite effect. Instead of universities keeping prices relatively constant, they quickly realized that since almost anyone can get a loan they could charge exorbitant prices and people would simply take out the "guaranteed" loan to pay for it. I don't know of a good data source off-hand, but I suspect there is a high correlation between the rapid increase in the cost of undergraduate education and the amount of federal funds put toward student loans (with federal funding leading rather than lagging the cost increase).
At least from the perspective of my parents, you are essentially now damned if you do (save carefully, don't qualify for student loans, then drain your savings to pay) and damned if you don't (don't save, qualify for student loans, then be stuck with crippling debt).
It's also worth mentioning that many of the top universities in the world reside in the US and there has been an increasing amount of foreign competition for admittance to US colleges. Not only does this exert upward pressure on the price, but many colleges charge a large additional fee to foreign students which acts as a further incentive to raise domestic prices.
Upvotes: 4 <issue_comment>username_10: Put simply: **the US government provides government-guaranteed loans**.
Since a potential student can get a $60,000 loan for education purposes only, the colleges can feel free to charge them pretty much as much as they can afford to borrow.
If the potential student could not borrow $60,000 for a college loan, the university would have no choice but to lower the admission price (as few people could afford to pay $60,000).
Think about it: without the current loan system, could the universities charge this much money for college? How would students come up with the money to afford these high prices?
Upvotes: -1 <issue_comment>username_11: Some people answering the question before me have the incorrect impression that U.S. students have no choice but to pay high tuition for private universities.
Let us start with some [U.S. government statistics](https://nces.ed.gov/programs/coe/indicator/csb/postsecondary-students).
> In fall 2020, nearly all undergraduate students (95 percent) attending degree-granting institutions enrolled in either public 4-year (48 percent), public 2-year (30 percent), or private nonprofit 4-year (17 percent) institutions... The remaining 5 percent of undergraduate students were distributed across private for-profit 2- and 4-year institutions and private nonprofit 2-year institutions in fall 2020.
"Public" in the U.S. means educational institutions run by a government - not what it means in the U.K. and some other English-speaking countries.
Some random examples of public 4-year institutions are
* [Hunter College of the City University of New York](https://www.hunter.cuny.edu/abouthunter/)
* [University of California, Berkeley](https://www.berkeley.edu/about/)
* [The Ohio State University](https://www.osu.edu/about)
* [Rutgers, The State University of New Jersey](https://www.rutgers.edu/about-rutgers)
* [The Pennsylvania State University](https://www.psu.edu/this-is-penn-state/mission-and-values/)
Some examples of public two-year institutions (almost synonymous with community colleges) are:
* 12 2-year colleges in [Connecticut State system](https://www.ct.edu/cscu)
* 116 [California Community Colleges](https://www.cccco.edu/)
* [Borough of Manhattan Community College](https://www.bmcc.cuny.edu/) of the City University of New York
(For non-U.S. readers - many undergraduates attend a community college with the intention to transfer later to a 4-year school and get a 4-year degree. Not all succeed, but at a typical U.S. community college, young people intending to transfer often form the majority of the students.)
Some diverse examples of private universities are:
* [Columbia University in the City of New York](https://www.columbia.edu/content/about-columbia)
* [New York University](https://www.nyu.edu/)
* [Georgetown University](https://www.georgetown.edu/who-we-are/our-catholic-jesuit-heritage/)
* [Liberty University](https://www.liberty.edu/about/liberty-university-quick-facts/)
Indeed, the sticker prices of the tuition at most private schools seem like a lot. But, according to the government statistics above, nearly 4 out of 5 U.S. undergraduates choose to attend a public 4-year or a 2-year program, rather than a private one. Public institutions tend to be free or very low cost for the students, with the taxpayers covering most of the cost. For example, in the City University of New York or State University of New York systems, tuition is free for students who reside in New York State and earn less than $125,000 a year ([details](https://www.hesc.ny.gov/pay-for-college/financial-aid/types-of-financial-aid/nys-grants-scholarships-awards/the-excelsior-scholarship.html)). Tuition is not free, but is still very cheap, for students who earn more than $125,000 a year, or come from another state or country.
We should note that in addition to paying (or not) for tuition, a student needs to eat something, and sleep somewhere, which in the U.S. can cost about $1,000-2,000 per month for 4 years. Many public and private institutions offer their students subsidized housing and meal plans - typically, not free, but cheaper than market price. Also, many undergraduates are able to live on the cheap at home with their families while getting their bachelor's degree.
Nor, in general, are these free and low-cost public institutions difficult to get into, which is how free universities in other countries usually operate. For example, CUNY and SUNY have practically open admissions. The statistics on the percentage of students who apply and are not accepted are not useful for these schools because (almost) any student who graduated high school in New York with a grade of B or better has to be admitted, and can usually get free tuition.
Most U.S. students are well aware that it is not necessary to pay for a private school in order to earn a bachelors degree. Only a little over 1 in 5 do choose to attend a private school, rather than a public one.
**EDIT:** I will address some points raised in the question and in the comments.
Let us compare the above facts (wide availability of free and low-cost public institutions) with the picture painted by the Russian government's [Internet Research Agency](https://en.wikipedia.org/wiki/Internet_Research_Agency):
<https://almavest.ru/ru/node/1379>
English translation:
> *The United States, in the field of education, lacks many concepts familiar to us in the post-Soviet space...*
>
> *There are no federal "state universities" in the United States, because historically it turned out that education is not mentioned in the US Constitution - in 1787, the Founding Fathers of the United States, apparently, were not interested...*
>
> *All universities in the USA are paid, there is no free post-secondary education in the USA at all.*
<https://legalizuem.ru/blog/vysshee-obrazovanie-v-ssha>
English translation:
> *It is also worth considering that American education is considered one of the most expensive in the world; unfortunately, there is no free education in America...*
>
> *The minimum cost of studying at an American university is from $5,000 per year. On average, this is about 30,000 dollars a year.*
> Besides, if your education operates like a free market, one would expect that at least some universities would choose to offer their educational services at a lower price. Do any? If so, why aren't they popular?
There are some for-profit universities. As you can see from the statistics I cited, the vast majority of U.S. undergraduates attend public or private non-profit institutions. Here are a couple of examples of for-profit universities:
* [De Vry](https://www.devry.edu/about.html)
* [University of Phoenix](https://www.phoenix.edu/about.html)
For-profit universities can't compete with public universities by offering lower tuition, because public universities are heavily subsidized by taxpayers. Generalizing very broadly, most Americans don't view for-profit universities as a good option for earning a bachelor's degree. Some people view for-profit universities as sometimes a good option for taking some non-degree courses, e.g. learning a foreign language or a particular skill. Some for-profit universities are outright scams.
> Are there options for Americans to get your university education for free?
Yes, many. For example, as I explained above, most New York State residents can get into SUNY or CUNY (the admissions are almost open), and the tuition is free for most residents, and very low for others. Almost 4 out of 5 U.S. students choose this option.
At universities that do charge high tuition, many students receive some kind of grants or scholarships. I'd venture to say that it's more common now than paying the full sticker-price tuition. For example, the beatnik author [<NAME> "Jac" de Kérouac attended Columbia University on a football scholarship](https://blogs.cul.columbia.edu/rbml/2019/10/29/jack-kerouac-played-football-here/) (go Lions!!)
A recent [New York Times article](https://www.nytimes.com/2022/12/14/us/college-universities-college-tuition-reset.html) says that some private universities are lowering their tuition sticker price to better reflect what students actually pay.
One common way to earn free tuition at expensive private universities is to volunteer to serve in the U.S. military. Then (with some restrictions), U.S. taxpayers will pay college tuition for you, your spouse, and dependents.
Some (not all) states have programs where the state pays some or all university tuition for people receiving public assistance (welfare, dole). Sometimes they pay for a very expensive private university.
> why is there a student debt crisis?
The truth is more complex than what your government's propaganda paints. For example, I know one young woman who borrowed over $100,000 as student loans, and spent it mostly on things like travel and clothes. She states that her plan was to marry a rich man who would pay off her student loans. As far as I know, her marriage plans have not materialized yet, but rather I and other U.S. taxpayers will end up paying for her student loans.
According to [CUNY's Hunter College](https://hunter.cuny.edu/news/hunter-jumps-six-places-in-u-s-news-world-reports-college-rankings/),
> *The median federal loan graduate indebtedness for Hunter students was just $12,500. (Hunter is proud that 75% of its students graduate debt-free.)*

> Is there a reason why more people don't go there?
I'm sure that the over 1 out of 5 students who attend private universities believe that there are valid reasons for their choices. For many students, the tuition cost is not a big consideration because someone else pays. Some students believe that the campus environment is more pleasant in a private university, than in a public one ([example](https://www.wm.edu/offices/wellness/campusrec/programs/outdoorrec/climbingwall/index.php)), or that they would get better education in some way, or that they would be more likely to find a good job or be admitted to a competitive graduate school after graduation.
But the claim that Americans wanting a bachelor's degree have no choice other than to borrow hundreds of thousands of dollars to pay for private university tuition is demonstrably false.
> Do these universities offer all majors?
That's definitely a consideration for some students. For example, some religious students attend a religious university (many Roman Catholic universities; Liberty - evangelicals, etc) because they want to study theology, and they don't want to do it at a public university.
For certain majors, such as cinema studies or forensic studies, some undergraduate programs are very well regarded, while most others are not.
Upvotes: 1
---
<issue_start>username_0: ### Background:
Currently, I am trying to manually scrape the contents of a transfer map tool hosted on Arizona State University's website. I was pretty successful in deconstructing how the URL for the tool/webpage works, but found a strange naming convention for its colleges.
Specifically, the colleges are named using acronyms that don't align with the names of the schools.
For example, <NAME> is shortened to CBA and <NAME> is shortened to CES in the URL. As far as I know, ASU doesn't use these acronyms to name their schools.
However, <NAME> is ASU's school/college for business administration, and <NAME> is ASU's school/college for engineering. To add to that, I have seen other universities use these acronyms to name their colleges.
Here's the whole list of acronyms used by the tool:
[Screenshot of the acronym list used by the tool](https://i.stack.imgur.com/Nbvzy.png)
### Question (TL;DR):
I was wondering if these types of acronyms have a naming standard associated with them? Something for me to reference while scraping the site?<issue_comment>username_1:
> I was wondering if these types of acronyms have a naming standard associated with them? Something for me to reference while scraping the site?
I feel confident in saying no, there is no such thing. Even within university systems, names are all over the place, for example "Schools of X" and "Colleges of Y" are sometimes the same level, sometimes not.
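In practice, that means a scraper has to carry its own hand-maintained lookup table. A minimal sketch of that approach in Python - note that the expanded college names, `BASE_URL`, and the `college` query parameter are hypothetical placeholders (the post masks the real names and URL structure); only the acronyms CBA and CES come from the question:

```python
# Hand-maintained acronym-to-college mapping for URL construction.
# BASE_URL and the query-parameter name are placeholders, not ASU's real values.
BASE_URL = "https://example.edu/transfer-map"

COLLEGE_ACRONYMS = {
    "CBA": "College of Business Administration",   # placeholder expansion
    "CES": "College of Engineering and Sciences",  # placeholder expansion
    # ... fill in the rest by hand from the screenshot in the question
}

def college_url(acronym: str) -> str:
    """Build the tool URL for one college, failing loudly on unknown acronyms."""
    if acronym not in COLLEGE_ACRONYMS:
        raise KeyError(f"unmapped college acronym: {acronym}")
    return f"{BASE_URL}?college={acronym}"

print(college_url("CBA"))  # https://example.edu/transfer-map?college=CBA
```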
Upvotes: 4 [selected_answer]<issue_comment>username_2:
> Was wondering if these types of acronyms have a naming standard associated with them? Something for me to reference while scraping the site?
No. To provide an example building upon an answer by [<NAME>](https://academia.stackexchange.com/a/191375/33210), consider an *EEB* department.
These departments are typically macro-biology (often the old botany and zoology departments merged together). EEB can stand for several titles including
* [Ecology, Evolution and Behavior](https://cbs.umn.edu/academics/departments/eeb),
* [Ecology & Evolutionary Biology](https://eeb.princeton.edu/), and
* [Evolution, Ecology, and Behavior](https://biology.indiana.edu/graduate/evolution-ecology-behavior/faculty/index.html).
Upvotes: 1
---
<issue_start>username_0: Americans spend 12 years in school, which is more than in many places, yet when they come to university, they continue to study a [general education](https://en.wiktionary.org/wiki/gen_ed) program that may be unrelated to their major.
My understanding is that it eats up the better part of their first two years, and that students do not necessarily even apply for a specific major at the time they start.
Considering how much an average American student is paying I would have expected there would be a higher demand for more specialised programs with fewer unnecessary courses. Particularly since universities in the US are very sensitive to the demand for their services. Why doesn't this seem to occur?<issue_comment>username_1: The goal of American higher education is more civic and cultural than seems to be the case in Europe.
European students seem to apply to a university for a specific course, in a specific topic, and then take courses in that field nearly exclusively, for 3 years. This creates a high level professional in the chosen field. A physicist will learn what a physicist should know (and little else).
American students apply to a university, often but not necessarily with a specific course or topic in mind. Then they take approximately a year of courses that deal with things that American society believes (or believed) that all educated, sophisticated, or refined citizens should know. Then they focus in upon their specific field, and for the next three years learn the same things that their European equivalent learns. Which is to say that a physicist will learn what a physicist should know, as well as what every educated adult/citizen/person should know.
It is a difference of goal, or world view. At least originally.
There are perfectly fine arguments to be made that it continues only because of the force of tradition, helped along by the financial interests of the universities, and the desire of the students to have a "full" college experience. But we can make similarly cynical arguments about the European system, which might be characterized as churning out tools to be used up in specific jobs and roles, and not citizens. Cogs to be subsumed by a machine, instead of equals for a democracy.
But there is no particular reason to make either of these cynical arguments. They are different systems with different histories. That would be sufficient for me.
As far as whether Americans apply to specific courses or majors, [50 to 80% of American students enter University with a major](https://www.northcentralcollege.edu/news/2022/04/29/best-majors-undecided-students). Many, perhaps most, schools will accept applicants who are as of yet undecided, and students may change their major and course of study after entering a university. But the most common path is to apply to a specific university for a specific major or course. And for more prestigious or competitive majors, colleges, courses, &c, this will be a practical necessity.
Upvotes: 5 <issue_comment>username_2: I attended high school both in Europe and in the US (exchange programme), and my experience was that the US educational system is roughly "one step behind", timewise, compared to the European one. The academic level I experienced in the senior year of US high school was roughly comparable to my last year of middle school in Italy. From friends who came through the US college/university system, I also get the feeling that their first few years of higher education (the general-knowledge, no-major-declared-yet years you are wondering about) covered roughly the same range of topics and complexity as my Italian Liceo.
For example: the highest maths class I took in the US was Trigonometry (and even this was done mostly using a graphing calculator). There were harder classes one could choose, sometimes for AP credits, but you could also easily complete your HS maths requirements with an accounting class. In Italy, I had started calculus (non-optionally) years earlier. Likewise literature was very much at a pretty basic familiarity (rather than analysis) level and foreign languages were mostly films and vocab quizzes.
Now this is my own experience, and there were other aspects (the US HS was in a very rural location, and I did not pick some of the hardest classes as they were held at the local college and I couldn't drive). But all other exchange students from Europe had a similar experience and we all easily coasted on the knowledge we'd gained in our early teens, even with the language barrier.
As to why, I have a few theories, but I don't claim to have an answer.
For one thing, European education tends to stratify much earlier - in many countries, you choose at some point in your early teens whether to pursue a more academic, university-focussed path, or a technical qualification. To do this, you need to have a variety of schools available within range - much of the US is too sparsely populated for this. Stratification allows European schools to challenge the more academically oriented students at an earlier age and get them to University with a more solid background; it also means that important life decisions are made much earlier, and may be much more biased by factors such as family background and socioeconomic status rather than the student's inclinations and skills.
The focus on non-academic endeavours was huge in the US compared to Europe. Sports, extracurriculars, clubs, productions - all these had the same standing as classes, if not sometimes more, and were allocated time proportionally. Even some of the classes were, effectively, clubs - like "yearbook editing". I am not making a value judgement here: do extracurriculars distract from academic pursuits, or provide a more rounded education? Is it a better idea to dedicate your teenage years to study? Sports? Art?
But a practical result was that US students reached university without a lot of the general knowledge that European students - especially those who went to the more academically oriented types of high school - had acquired at their age. The first few years of college/Uni provided that, for those who went.
Upvotes: 6 <issue_comment>username_3:
> My understanding is that ... students do not necessarily even apply for a specific major at the time they start.
It's true that some students do not declare a major when they start their studies, but that's nearly always because they haven't decided on a major yet. It's not the standard practice. Students will often take courses of general usefulness to the handful of majors that they're considering, and then make a decision based on which subjects match their interests and aptitudes. I don't have any specific numbers for you but based on my experiences I'd wager that these "undecided" students are a relatively small percentage of the student population, and few remain that way longer than a semester or two. Most students have at least a *vague* idea of what they want to study. They might (for example) start as "general business" and then change to "finance" later, where the first few semesters look almost identical for both.
> My understanding is that it eats up the better part of their first two years
This can vary greatly based on the school and on the specific degree program you're pursuing. Students often get some choice over which general ed courses they take, so they can be tailored to match their degree program if the student wishes to do so. For example, I was an engineering major and I chose a technical writing course to satisfy my "language arts" gen-ed requirement. During my entire college career, I believe I only had to take 2-3 general ed courses that didn't contribute to my major or career at least indirectly (although I still enjoyed taking them).
Also, most courses in a degree program require that the student has already completed certain other courses first. The number of prerequisite-free courses can be rather small, so many students have limited options for courses in their major for the first couple of semesters. My first two semesters consisted of one introductory engineering course plus the background math and science courses that I needed to unlock the bulk of the classes in my major. A lot of times that's not enough hours to be considered a full-time student, so your schedule gets padded out with general ed courses. It wasn't until my fourth semester that I was done with the general ed/background courses and started taking nothing but courses in my major. That's not because we had some massive general ed requirement, that's just how long it took me to get through the chain of prerequisites.
Contrast this with one of my roommates. I don't recall exactly what his major was, but his degree program included a large number of courses with few or no prerequisites. Starting from his first semester, he was taking almost exclusively courses from his major with maybe 1 general ed course per semester sprinkled in occasionally. He intentionally chose gen-ed courses completely unrelated to his major as he felt it helped him stay creative and mentally flexible, but that was his choice. His weren't front-loaded like mine were, simply due to the way our respective course lists were structured. You'll see very different experiences between different schools and different degree programs, and sometimes even between different students in the same program.
Upvotes: 2 <issue_comment>username_4: A few points:
* Market forces don't really apply here because schools that want to be [accredited](https://en.wikipedia.org/wiki/Higher_education_accreditation_in_the_United_States) (which is pretty much all of them since students can't use Government-sponsored loans at unaccredited schools) are forced to require 120 credits for a bachelor's degree. 120 credits is about 40 classes. So there is no option of graduating faster by skipping the "unnecessary" classes.
* Some schools like [Brown](https://www.brown.edu/academics/undergraduate/open-curriculum) famously have an "open curriculum," allowing students to count to 120 however they want. But in practice, most students who attend schools like Brown are there precisely because they want to take many different subjects, not because they want to load up on a single subject.
* Unlike in certain other countries (e.g., France), US students do not specialize so much during high school. So, most students have taken only the introductory classes in each subject, and few understand the differences between similar subjects (e.g., chemistry / biochemistry / chemical engineering) or the career options that each subject might provide. So, many first-year students are not ready to declare a major and appreciate having an extra year or so to decide.
* As others have noted, the general philosophy is that university is not trade school; rather, university graduates should be well-rounded scholars. Certainly, it would be nice if all of our professionals had excellent reading/writing/speaking skills, could think logically, solve problems, use technology, and had some understanding of global culture, history, and a foreign language or two. Whether most colleges' general education requirements really achieve this goal seems doubtful to me.
Upvotes: 3 <issue_comment>username_5: On top of all the explanations given, there is a powerful financial incentive to do so. Why let an enrolled student leave in 2 years when you can force them to stay for 4?
As part of that financial incentive, universities pander to students. Some students are much more likely to enroll and pay for a class in Klingon language than a class in business statistics. My experience shows significant detours from practical knowledge in higher education. Universities do not take responsibility for their offerings in this context. It's not their job to prepare students. It's their job to attract students and money. That's how the gears turn.
You can argue all you want about the benefits and the dedication, as long as you mention the fact that university gets their money.
I would also point out that American universities are notorious for not adequately preparing students for the workforce, which erodes some of the more --shall we say sympathetic-- justifications.
Upvotes: 3
---
<issue_start>username_0: I am currently writing my mathematics master's thesis (in topology) in my native language, German. Most of the literature is in English and some of the words are hard to translate. Is it ok to mix both languages?
For example, the German term for "adjunction space" would be *Verklebungsraum*, but no German mathematicians use that either.<issue_comment>username_1: In the end, it's a matter for your thesis style guidelines, or something your advisor should know.
That being said, in my (German-language) Diplom and PhD theses, I did use English technical terms, sometimes in addition to the German translation. I was told to mark foreign language terms in italics.
Upvotes: 2 <issue_comment>username_2: As a question of style, it is not fine to mix two languages. If you introduce a technical term, you are more than encouraged to put the English term in parentheses, maybe with a citation:
> Potenzreihen (Power Series [Cauchy87]) have the form ...
There is still mathematics published in German, though less and less because authors actually want to be read, and there are still textbooks in German, which you can consult for the German expression for a mathematical concept. If you cannot find one, you get to translate yourself, trying to be as literal as possible. So adjunction space would become Adjunktionsraum, if (counterfactually) there were no German expression for it.
Ultimately of course, it is the guidelines of your university and / or of your advisor(s) that matter.
Upvotes: 2 <issue_comment>username_3: Follow these points:
1. Ask your adviser.
2. Check if there are any university guidelines.
3. Check previous (German) theses in your field from the same university.
4. Check standard textbooks in your field written in German.
(No one here will be able to answer your question.)
Upvotes: 2
---
<issue_start>username_0: We are towards the end of a graduate level course of about 20 students, and its time for course evaluation. The instructor is fairly new (just graduated from Ph.D a year ago) and is not a native English speaker. Overall the instructor is knowledgeable on the subject. However, the style of delivery is very monotonous and often sleep inducing. Instructor just reads off from slides, almost like a news reader, only with even less modulation. Sometimes the thick accent itself seems to require additional effort to listen more. However, comprehension has never been an issue. Also, instructor is in general a good person, very accommodating to missed lectures and wants students to succeed.
I am a bit conflicted about whether I should give honest feedback about how monotonous the lectures were, or just a clichéd positive one. On the one hand, my reasoning is: if I do not give honest feedback, the instructor will not realize they need to change their style of teaching. On the other hand, I do not want to make a rather young and new faculty member feel bad, since we can't say for sure how people react to negative feedback. I also think that the monotony is something other students felt too, so even if I don't give honest feedback, someone else will. Some advice on what the right perspective should be would be helpful.<issue_comment>username_1: Getting honest but not malicious feedback is invaluable when you are just starting with teaching. Most feedback we receive is either of the bland, cliched, positive nature you describe, or "it was crap".
Being able to give honest constructive feedback in a way that does not offend the receiver is a difficult but incredibly valuable skill.
One way to start might be the "sh\*t sandwich". Structure your feedback in the following way:
1. Something you liked.
2. Something you didn't like.
3. One concrete suggestion for something the receiver should focus on for maximum improvement.
Upvotes: 5 <issue_comment>username_2: You can be honest, but not cruel. So "brutally" is, perhaps, a bit too strong.
Note, of course, that new instructors are often fairly poor at it, learning on the job as they are.
If you think the person has potential, even if not realized, be aware that it might not be best to say things that make their career harder. In a small group (20 students) it might be an option to speak with them rather than make the evaluation overly negative.
New instructors (myself IIRC, though it was long ago) can fail in many ways. I was overly pedantic and only learned (if slowly) that such things didn't work.
Also, as a grad student, don't get into the habit of depending overly on the quality of lectures. There are other ways to learn, including asking a lot of questions, work groups, exercises, outside readings, etc. If you were successful at learning, then the flaws of the instructor are less important than if they were an actual impediment. Painful, yes, I've seen that too.
Upvotes: 3 <issue_comment>username_3: Monotonically reading off slides is a poor presentation technique in any area - whether lecturing or presenting a research talk or reading the daily news on TV/radio.
You can give *honest* feedback without being *brutal*. I'd focus on things like the presentation style rather than the accent, as the latter is mostly an accident of where the instructor grew up (though they may also be inter-related - an instructor who feels uncomfortable lecturing in English might defer to reading text verbatim, regardless of their actual English proficiency). Generally, feedback is best when it's *constructive*: suggest what you'd like the instructor to do differently, rather than just what they did wrong, e.g.: "I think the lectures would be more engaging and easier to follow if the instructor spoke conversationally about the material rather than reading directly from the slides."
It's also appropriate to mix positive and negative feedback. Sometimes this is suggested as more of a "management tool" to convey areas for improvement without offending the target, but it's also appropriate when it's fully honest and natural - if you appreciated the instructor's flexibility or willingness to spend extra time to help students succeed, point those out too.
Upvotes: 5 <issue_comment>username_4: I agree with the answers that all (at the moment) say yes to honest but not brutal feedback. You could almost copy your question here to the feedback form, complete with your hesitation.
Upvotes: 2 <issue_comment>username_5: Do you like to hear "brutally honest" feedback on stuff you didn't do well? Or are you at least ready to? If so, you are entitled to give it.
I do not know you, of course, but my experience is that people who are quite generous with brutally honest criticism are not quite so gracious in accepting similarly brutal criticism themselves. Maybe that's not true for you; I do not know.
But in general, criticism works best if it shows improvement opportunities in the process and actions, and if it clearly does not refer to the person. If the person comes from a non-western culture, one additionally needs some cultural sensitivity to convey the message in a way that will help improve things without causing the person to either shut down or be significantly hurt in their self-esteem.
Upvotes: 3 <issue_comment>username_6: First, students' course evaluations are notoriously biased. Faculty who do not look like white men consistently receive worse evaluations from students due to bias.
The age and experience level of the instructor should ***not*** be part of your evaluation.
Be honest, but keep it constructive. Do mention that the instructor just reads off the slides. That is something the instructor can change (though, depending on circumstances, perhaps not easily).
The typical instructor is not going to be able to learn a new accent (assuming this is not an acting class). Mention the instructor's accent only with great caution. If they have an accent with a low social status, this can bias student evaluations. For example, if they speak Black Vernacular English or Appalachian English, that might be causing bias. Some students might find faculty with a low-status accent to be excellent role models.
Where possible, evaluate the course, not the person teaching it.
Upvotes: 0 <issue_comment>username_7: You should **give honest and constructive feedback**. Try to avoid "brutality" in course feedback, and instead aim to report the shortcomings clearly, in a calm and measured fashion. Go through each of the problems you observed and report clearly what went wrong and how it detracted from your learning or participation in the course. If the course was very poor in quality, you could consider giving a shortened version that focuses on the top three problems, to avoid overloading the instructor with problems.
Upvotes: 4 <issue_comment>username_8: Instead of a generic unactionable criticism such as:
>
> These lectures were so monotonous that they encouraged me to write a will, just in case one of them bored me to death.
>
>
>
One might give concrete ideas for improvement:
>
> To make the lectures more engaging for students, it might be good to have:
>
>
> * Interactivity, e.g. frequent short questions/quizzes that the student first tries, and then the teacher demonstrates the correct method.
> * Frequent "breaks" with discussions, real-world examples, videos, ...
> * A lively tone of voice, with clear diction and separation between words. (Might help with the accent.)
> * [Other relevant strategies.](https://www.google.com/search?q=engaging+teaching+strategies)
>
>
>
Upvotes: 3 <issue_comment>username_9: I think feedback should be honest but respectful and courteous. You should offer points to improve, but there is no need to be a jerk about it.
It's the worst thing when you have an awful lecturer who has been teaching for twenty years and never improved because no-one gave them honest feedback. If they need to improve, then you have to tell them, as they won't do it otherwise.
Also, don't just assume that other students will give the feedback which is on your mind; they may be assuming the same about you.
Upvotes: 2 <issue_comment>username_10: I agree with your own caution about being too brutally critical, as this will make the professor *personally* defensive and thus reluctant to change.
While - in theory - suggesting that the professor observe (presumably exemplary) Professors Mandy and Jim in the same department *might* work for *some* staff, it is more likely to backfire, as the external view of behavior is so different from the internal feeling of motivation. And there is also the matter of professional pride: nobody wants to be anyone else's clone.
From your information, I think the main problem is the monotone and how it affects the allocation of attention-focus by students - who are likely writing notes concurrently: the professor is not changing his tone to match the importance of the immediate point.
I think the solution to this might involve something like what is used in "labs" for foreign language learning, e.g. something that plays a dialogue on some ordinary scenario between native speakers that the learner may listen to, repeat, hear his own recording and repeat till he adjudges his pronunciations adequate. Of course, here the topic is not an encounter between a travelling salesman and a train conductor: it's about an advanced field of study. And likely no examples exist for that field. However some public lectures, e.g. Royal Society Lectures or Open University course lectures (see [YouTube](https://www.youtube.com) or [the OU website](https://www.open.ac.uk/)), exist where the speakers have been given a reasonable amount of on-camera training.
But before the latter could be of benefit, the categorization and hence objectivization (i.e. de-personalization) of this problem needs to be done and it is best that some members of your class approach your university's Teaching Support Unit (or whatever it's called over there) and bring the professionals there onboard with you. If their terminology for this issue is for example "tone modulation", then refer to this term in your critique to the professor.
But the good thing in all this is the professor's own goodwill: he wants to teach you his subject and guide you in your research. So many bad teaching staff simply do not have this - and this apathy is palpable and itself sows apathy in the souls of students.
I think this guy will come through if supported by colleagues and the university's teaching support bureau.
Upvotes: 0 <issue_comment>username_11: It depends what this implies.
In countries where the **impact is zero** (most European universities for instance), when there is a feedback system it is either
* to make the school more modern and whatnot, and it goes to the bin
* to create some statistics for advertisement
* an idea of the instructor themselves to get useful feedback
I was in the latter case; I wanted students to tell me what they liked and what they did not. Some of the feedback was not useful (either a nice gesture via a "great!" or a less nice one via "it was crap"), some was expected but not usable ("the curriculum is too complicated", "the math is too complicated" → these I would take to the dean as inputs for possible (though unlikely) general changes), and finally, some was pure gold (what they found was good **because...**, what they found was bad **because...**).
I always encouraged the students to be frank and make the *because...* part as detailed as possible. It was anonymous, but someone could put their name if they were fine with following up (which happened a few times and was great).
In countries where **there is impact** (my understanding is that, at least in the US, a teacher's career depends to some extent on this feedback), you need to use your ethics.
If you think that the teacher was competent, you may want to give good "grades", but highlight in the "comments" section that their writing was hard to understand, or that the course would benefit from more enthusiasm on their part, or whatever.
If the teacher is bad (which is subjective, but some are really bad) then you have a chance to influence how others will be educated.
Upvotes: 0 <issue_comment>username_12: Constructive criticism is often very much appreciated. Give feedback based on the presentation of work and make the person understand that what you are addressing is their style of delivery and not the person directly or their personality.
You often get people who say things like, "After you pointed out my robotic/monotonous voice, I have significantly improved my mode of teaching." Try to give honest feedback at all times but use words wisely to differentiate between the person and their delivery of work.
Upvotes: 1 <issue_comment>username_13: Even if honesty is always good in the face of a well-meaning, non-prejudiced person that knows and has the confidence to know that you mean well and are not prejudiced too…
You don’t have to worry about honesty. It’s not about honesty. It’s about **usefulness**.
Clearly, the current situation is harmful to you, and the instructor is causing that, probably without intending it or being aware of it. It may, after all, be the deciding factor in the avalanche that decides your future.
So how about *you* not getting hurt either?
You can ask yourself: **What is the most useful? To the instructor. To you.**
Both directly, in terms of plain information given, as well as in terms of what the person can actually accept.
For example: A suggestion may be very useful. And a mental trigger or other mental problem does not change that, nor give anyone the right to blame you for their problems… But a trigger will prevent the person from accepting otherwise good information, and you cannot quickly fix that with InstaTherapy™. So, as you hinted at, you still want to communicate it in a way that can be accepted even with that. Because your goal is to actually get it across.
But actually, it’s really not your job, but theirs. (The instructor is paid too, no? So where’s your money’s worth?) But that is of no use to help you right there, right then. You need to actually get it across!
I don’t know how the culture in your country is, but I know about the stark difference in this, between my country, Germany, and e.g. the USA, and what effects it has. …
Here in Germany, frankness is the standard modus operandi by everyone, and you either grow some confidence and stability (what we consider growing up) or die trying. But what looks offensive to US-Americans is only superficial, as we are well-meaning, usually want things to be better, and don’t even think about putting somebody down (unless we are clearly angry). …
By contrast, in the US, everyone is always nice, which is of course *nice*, and everything is wrapped in soft cushions and cotton candy. But in my experience, this is often only superficial, and behind the scenes it might be “basically war”, and criticism is very often seen as purposefully hurtful, so everyone either becomes very careful in how they express themselves, or dies trying. (In its most extreme case, it enables modern people to bully by becoming “offended”.)
US-Americans probably find our culture as bewildering as we find theirs.
So of course, *the approach depends on the culture of who you’re dealing with, and cannot be answered fully in general for a specific person.* (In short: Empathy, for them as well as you, and getting to know the person as well as you makes things easy.)
In the end, **I wouldn’t shy away from giving *any* criticism that will improve things, but would *ALSO* focus on expressing it in the *right* way.** No *Either-Or*, but an *And*!
That way you get the best of both worlds. It takes empathy and brain power of course. But you literally are currently doing exercises for your brain power, so… :)
Upvotes: 0 <issue_comment>username_14: What do you want to achieve? Do you want to let the instructor know that his/her accent makes it hard to follow the class? Do you want him to improve? Ask yourself what your goal is, and then what kind of feedback helps you arrive at this goal. If your goal had been to make it easier to follow the class, why didn't you bring up your issues already during the semester? Critique for the sake of critique is a waste of everyone's time.
Upvotes: 0
|
2022/12/08
| 716
| 3,128
|
<issue_start>username_0: I recently found a Computer Science BA degree program that requires, as part of the program requirements, students to get a minor in a second department. They say this is done to provide the student with breadth and depth beyond their CS program. Is this arrangement of "just get any other minor" common or uncommon in CS or other STEM departments? Is it beneficial for the students versus teaching additional humanities-oriented courses in the CS department?
If anyone knows of examples of such programs a comment would be appreciated.<issue_comment>username_1: It is a fairly uncommon practice that I would expect to see at small liberal arts colleges. Such a program does make sense. Many computing related jobs benefit from such capabilities as good writing (which one can pick up by studying history or theology for example) or some background knowledge in an application area such as music or art for people interested in certain aspects of game development.
The college can then offer a reduced computing program (as compared to ABET or ACM core requirements) while still allowing their graduates to transition easily into a role in computing.
Prescribing the minor would not work well. One of the reasons to get a liberal arts degree is to be a good all-rounder.
Students from an ABET-accredited program should be able to code circles around these liberal arts students (though that is debatable), but will not do so well in jobs that are not exclusively engineering oriented.
Finally, the experience gathered on the job becomes a larger and larger part of a person's capabilities so that the importance of the college experience slowly recedes over the years.
On a personal note, many, many years ago, I was at St. Louis University, which at that point (how times have changed) had a very rudimentary CS degree in Mathematics, but their graduates were picked up by <NAME> and similar places, not because of their engineering skills, but because they could easily learn to do the software engineering jobs (basically application programming) while being able to write such things as manuals and communicate well with the users of their programs. Apparently, they found it more difficult to teach these soft skills to graduates from more computer centric programs.
Upvotes: 3 <issue_comment>username_2: A degree program is basically a list of requirements to fulfill, primarily a list of courses. For example, you must complete courses A, B, C, D, E, F, X, Y, Z to get the degree.
Requiring a minor is no different than saying you must complete A, B, C, D, E, F, M, where M = X, Y, Z. A program may give you some flexibility on what courses fulfill X, Y, and Z; if they choose to organize X, Y, Z into various versions of "M", that's really no different.
An institution can group and require their courses however they'd like, presumably they're trying to either make their program attractive to students directly or make their graduates attractive to people who might want to employ or train them further (and therefore make their program attractive to students indirectly).
Upvotes: 2
|
2022/12/09
| 6,271
| 27,348
|
<issue_start>username_0: On the one hand, weaker students benefit greatly from studying with good students, and it's helpful for the good student to be able to explain a concept to weaker students as well. On the other hand, good students benefit from studying with other good students ([example](https://en.wikipedia.org/wiki/Tracking_(education)#Higher_achievement_of_high-ability_students)), and it's why universities strive to assemble full classes of good students.
In a class with both good students and weaker ones, should the teacher encourage good students to help weaker ones (e.g. assign them to the same group for group work)?<issue_comment>username_1: As a student, I am against this. Everything should stop at "encouragement" only. Please do not intervene, as there can be unintended consequences for students.
Other people's understanding of things is their responsibility, not mine. I occasionally help "weaker" students by providing tutorial sessions. However, I only do it if it is somewhat beneficial for me (e.g. helping me reinforce my knowledge), or somehow I find a moral obligation to help them. That is my voluntary decision.
There is a high chance that, by purposely assigning weak students to groups with good students, some "weak" students will not do anything and will still get the same score (fair enough?), or will drag down the score of others through non-participation.
I also believe that we have to learn from our own mistakes to become a better person. To help some people, maybe it is best to let them fail a few times.
Upvotes: 7 [selected_answer]<issue_comment>username_2: I think it is important to make a distinction between "weak" students that are weak because they have a harder time understanding the subject (for what ever reasons, e.g. there are people with the intellectual capacity to understand things straight away and there are those who need to read through the material several times, get additional support material and talk it through before really understanding) but are hard working and willing to get better. On the other hand, there are "weak" students that are lazy and just try to get through with minimal effort, no matter the grade.
Just as edelweiss said, assigning lazy weak students to a group of "stronger" students will probably not lead to any benefit for anyone. On the contrary, the lazy weak students might do even less than before, as they have the strong students pulling them along. The strong students will probably end up doing more work (the work that the lazy weak student isn't doing), as they want to get good grades.
For the willing but weak students, a scheme like the one you suggest might be beneficial, but it will still take a toll on the stronger students, who now have to spend extra time explaining things they already know.
From my own experience, the best group work came out of groups that we were able to form ourselves with people that we knew had a similar work ethic. Forcing people to work together is almost never a good idea.
Upvotes: 5 <issue_comment>username_3: There are a few questions layered here:
* can (some) weak students benefit from help from stronger students?
* is this beneficial for the stronger students, and if not, is it justified for the benefit it offers to the weaker students?
* should the teacher actively/explicitly encourage this, and if so, how?
* should the teacher passively encourage this, for example by designing work groups with a specific aim to mix different levels of ability?
Sadly the answer to every one of this questions is "it depends". Some strong students are used to working on their own and will just barrel through group projects as if they were solo projects, perhaps increasing the group's grade but pretty much obliterating everyone else's learning experience. Others really thrive in explaining things to others and, as username_10 comments, really cement their own knowledge in the process. Some weak students are just struggling with the pace and/or style of explanation, and can get a lot out of going over the same concepts again with a peer. Others are just out of their depth, disengaged, or overwhelmed.
More than group projects, I personally found that self-organised study groups were a good way to bring together people who find this type of interaction beneficial (I was one of those students who learns by explaining to others, and I like to think I managed to help some of my fellow students this way), but they're not something you can have a great influence over as an instructor. They can also be hard to access for students who are shy or just less interested in socialising.
I'd be interested in hearing people's experiences of actively influencing group composition in group projects and whether that had any discernible effect.
Upvotes: 4 <issue_comment>username_4: In higher education and in general circumstances, no. Naturally there may be exceptions, but my answer is a hard no.
Edit: You might create an environment where that is possible, but you can't actually direct it through encouragement.
All the students are of employment age. The stronger students are under no obligation to pick up the slack for you, the paid educator.
In addition, by pursuing this course of action, you explicitly set different expectations for different students.
Most people have strong opinions about group work. Those opinions tend to be less than positive.
Me personally, I'm in that class to learn, not to help some stranger. His problems are his problems, not mine. I certainly didn't sign up for thermodynamics to help someone with English as a second language. Sorry, I've got my own problems.
For smaller children, or for education classes or the humanities, that answer might soften, of course, but that would be outlined in the syllabus, not left as a problem-solving opportunity.
Upvotes: 4 <issue_comment>username_5: As a student, my simple answer is *No*. Nothing should be forced; you might make it, say, a voluntary activity, but then don't give out credits for it or seem to favour students who participate in the activity. I help out my peers and they help me, plain and simple; but I do that at my discretion and with my friends. We're all adults who have other responsibilities beyond our courses, and if your assignment takes up my time beyond the course-assigned time commitment, then it is unfair to me.
Secondly, I strongly resonate with some of the other answers, especially the examples where students who are not actually "weak" but just lazy get carried by their teammates. In my first semester of college, since most people were not acquainted with each other, one of the courses allocated groups randomly. I vividly remember that in our group of five, one member rarely showed up to meetings. But really, no one wants to go complain to the instructor and waste their time; instead they just pass along the other person's name as well.
Finally, I see the same kind of comment on every post and was planning to respond, but [username_2 -they-](https://academia.stackexchange.com/users/133549/sursula-they) does a great job [responding to one of those](https://academia.stackexchange.com/questions/191398/should-teachers-encourage-good-students-to-help-weaker-ones/191418#comment516716_191402), and I would recommend that anyone planning to comment please read that before making the same point.
Upvotes: 2 <issue_comment>username_6: You have a responsibility as a teacher to make sure everything you do for a student is for their benefit; so if you do this, you need to do it for a pedagogical reason for *that* student, not just for the weaker students' benefit. I'd argue that the test is: would you have *every* student take on the role of teacher/mentor? If not, then it's not being done for that reason.
At younger levels (say, elementary school), there is often quite a strong pedagogical reason for having students teach other students: teaching them *how to teach*, or more generally how to communicate with others. It also reinforces the material, and makes sure they know all of the material and aren't just able to shortcut to the answer.
However, the key that's present there is that you don't only have the *stronger* students teach, but you have *all* students teach each other. Look at a Montessori classroom for example - every student teaches another as part of what they do. There's always younger students for older students to help teach, no matter their individual ability.
At the university level, classes are specialized and expected to focus on one specific thing. You don't go to Calculus 101 and expect to learn how to teach! As such, it's much harder to have a legitimate pedagogical reason for involving other students. Yes, if it is something that helps reinforce the material, then it's worth doing perhaps; but again, if you're doing it *only for the stronger students*, it's probably not for that reason - the weaker students need that reinforcement *more*.
Upvotes: 3 <issue_comment>username_7: This answer might not apply to some fields, but it does in something like CS where group work is valued and needed in both academia and the workplace. Your field seems to be physics, with which I have no experience, though I think that lab work is often (typically?) done in groups, not by individuals. I'm not so convinced it would apply in math, however. I think it would in true math research, though, and mathematicians are collaborating more widely in recent years than in the long-ago past.
When I was still teaching, I would make group work a fundamental part of the course and of the grading. Projects were typically about 70% of the grade, with most of that being group projects. The courses were pretty technical, say database theory or compiler construction.
I used two strategies for forming groups: self-selected and random. But I also learned that I had to teach students proper behavior in group work. It wasn't appropriate for the "top" student to just do all the work and carry the others. It wasn't appropriate for anyone to slack. But this doesn't come naturally to them. They need instruction about group meetings, consensus, working together rather than just trying to divide the work and then, in an extra, difficult, task, integrate the parts into a whole. Pairing on all tasks was encouraged and even demonstrated in classroom situations. I'd even do this with some grad students. I also used peer evaluation (not peer grading) so that students could give me information about who participated and contributed and what those contributions were.
The effect of self-selected groups was, I think, less beneficial to them. The "hot shots" could get together and mostly coast to a decent grade, learning less than they might have done. The "weak kittens" might be lost without external direction, though I had one massively successful experience with two "weak" students working together and outperforming the hot-shots, though they basically lived in my office for the semester asking and reasking the same questions. Finally, they got it and did well, explaining complex things to me by the end of the term. Not a common experience, but it can happen.
My experience with random selection was generally better, though instruction in group behavior was needed. It isn't part of the gene pool. A random group has, usually, some stronger and some weaker students, but if they work together all can benefit. One key, however, is that not everyone has the same skills and not everyone needs to contribute to the group in the same way. Suppose a ten-person group has to write a ten-page report. Let's divide the work so that everyone writes one page? How about everyone writes every tenth line? Ain't gonna work so gud.
But some students can focus on the library searching, some on the coding, some on the writeup, one or two on the actual management of the group and making sure that the parts will fit together. Everyone contributes something valuable; equal if not the same.
I had another success in such a random group. There was one student who seemed to be a bit dull in class, and I worried about him. He was put into a group of mostly "stronger" (my guess) students. When I asked them for peer evaluations at the end of the course, this "dull" student was the one they all described as the main contributor to the project, since he was able to keep everyone else "doing the right thing". Everyone in that group was happy with the result, as was I.
I've also had the situation where students who were in a long-lasting program and knew they were going to be part of several projects with the same team would use a "pay it forward" technique in which some of the students would do the bulk of the work (together) on the current project and let one of their peers "off the hook" to deal with other things, even family emergencies. That student would then have extra responsibility to contribute to the next round or project. In effect, they built a collaborative community in which they helped others in the group learn the important lessons. They were mutually supportive, but it wasn't that the hot-shots took over and left others out.
---
Detail on "peer evaluation". In a group of five ask two questions of each student, answers submitted privately but not anonymously.
1. Who were the three top contributors to the group and describe why you have listed them?
2. What was your own main contribution to the group?
These were seldom used to affect the grades, though they might. In the case above, the student mentioned got a boost. But a person not mentioned at all by anyone as a "main contributor" won't share in any bonuses and might be singled out for some intervention. But students didn't actually "grade" one another, just give positive comments when needed. A person refusing to answer the first question is just as much a problem as a student never mentioned as a contributor.
In a group of two, the first question would need to be something like "What was your partner's main contribution?"
Upvotes: 3 <issue_comment>username_7: This answer is independent of my other one here and may be more general, perhaps applying where group work isn't the norm.
Yes, I believe that students should help one another within bounds. Like the other answer, I think that they also need to be taught how to do this effectively so that both can benefit. This one is more applicable to learning things like math and other fields. Note that I didn't cast it as the "strong" helping the "weak" but as students helping one another.
First, this requires the permission of the instructor and some guidance. In fields like math, insight is hard won. Reading an answer to an exercise isn't the same and hasn't the same educational effect as doing the exercise yourself.
When a student asks the professor a question, the wise instructor will try to give the minimal guidance needed to get the student moving again, rather than providing a full answer. Asking questions when questions are asked can be effective. "Have you looked at ...?" as a response to "How do I ...?" for example. But, students don't naturally know how to do this. They need some instruction in it.
But any student will benefit from having to explain a difficult concept to others. ["Teach it to the rubber ducky"](https://en.wikipedia.org/wiki/Rubber_duck_debugging) is a common pattern, actually. As an undergrad we were sometimes assigned to give a lecture to the class on a small math concept. We then fielded questions. The instructor would later give feedback. Anyone who wants a career in academia will benefit from occasionally working with others on difficult concepts.
Mathematicians in large departments often have seminars that meet weekly with a few professors and a few grad students. Everyone can mention what they are working on and any blocks they currently have. Anyone can make suggestions. They do the same at conferences in a larger realm.
Students in law schools often do the same thing. The movie [*The Paper Chase*](https://en.wikipedia.org/wiki/The_Paper_Chase_(film)) describes this, sometimes painful, process.
But, the good students can't just do the work of the not as good ones, or no one benefits. Learning needs to take place, but it should be learning on both sides.
So, I'd suggest encouraging students to ask one another questions about the current material, but discourage them from revealing exercise answers (discourage = forbid). If your scale is small enough and you recognize those students with the most potential, you can, perhaps, guide them in the art of guidance. Mass education in huge groups makes this difficult/impossible, I'd guess.
Students, strong and weak, need a place to get appropriate assistance. Other students can be part of that. But not without some sensible bounds.
As username_10 says in a comment, one can learn a lot by teaching.
Upvotes: 2 <issue_comment>username_8: A lot hinges on how you're defining "good" and "weak" students. Is this based solely on their performance in your course(s)? What constitutes "performance"— course grade, level of class participation, an general assessment of some other set of skills, etc.? Is their categorization as good/weak based on their GPA or some other metric beyond your course alone?
An equally important consideration is what sort of group work you're thinking of creating these groups for. Is this a long-term group assignment, or just quick breakout groups during a short in-class exercise?
Finally, what form would your "encouragement" take? Would it be assigning groups with mixes of "good" and "weak" students? Or would you be somehow identifying the "good" students and the "weak" students and encouraging them in some way to pair up and work together on some assigned task?
---
The crucial thing to my mind is that, if you decide to proceed with the good/weak pairings, you refrain from identifying *to your students* which group they fall into. This would be, *at best*, potentially embarrassing to students (of either group).
That being said, if the pairing of "good" and "weak" students is for something like short-term, in-class exercises, then the stakes are low enough that any potential benefit would come with little risk (e.g. freeloading, stigmatization, inaccurate assessment as to who's "good" vs. "weak").
If your pairing is for longer-term assignments, there is the risk, as others have noted, that the students perceived as "weak" are simply not interested in doing more than the bare minimum, which would put more work on the shoulders of the "good" students, in effect punishing them for their better-than-average performance. Then again, no matter how groups are formed for long-term group assignments, there are almost always students who don't pull their weight, so this isn't really a downside of your proposed method of grouping students.
Most important to your proposal, though, are the criteria. If these are students you've worked with mainly or exclusively in one particular course, it seems very difficult to be certain of the accuracy of your assessment as to who is "good" and who is "weak". Could it be that the "weak" students just aren't engaged in the material for whatever reason? Or that undergrads these days are encouraged to take staggering course loads and your course falls at a time in their schedule that leaves them the least amount of time for the assignments? Or they're taking care of a sick parent/grandparent/sibling/child/partner? It's commendable to want to enact a strategy that will help the most students possible be successful in your course and elsewhere, but at the same time cases like this present so many unknown unknowns that erring on the side of restraint may be the best strategy.
To that end, a brief anecdote: a senior faculty member once reminded me that, as people who have chosen to progress through the highest levels of the educational system, we professors are likely to have been students who, at all levels, strove to excel (some might even say overachieve) in our coursework, for whatever reasons—but at the very least because we placed a high value on academic achievement in its own right (in particular achievements assessed through letter grades or otherwise highly prized in the very environments in which we now work). But not all our students have the same experience. Some may be interested in doing the bare minimum to pass their classes and get their undergraduate degrees as a means to increasing their earning power in the workforce. In this case, a student's particular performance in any given course might be of little consequence to them—but this doesn't make them "weak" students, it just means their priorities lie elsewhere. I teach at a school where many students are first-generation immigrants and the first in their families to attend college. It took me some time to get used to the fact that some of them aren't remotely interested in the nuances of the subject matter; they're trying to pass their classes because a degree means they can be a manager rather than an entry-level employee at, say, Target.\* For some students, education is a means to a material end, not necessarily something valuable in and of itself. And this is a perfectly valid reason to be a student. I relate this because I've found it useful to keep in mind as I continually attempt to contextualize what my potential role is, and how my courses and teaching methods fit within the broader context of my students as individual, competent adult people navigating invariably complex lives in which, for the vast majority of students, my courses and I inevitably play only a tiny part.
\*A specific example conveyed to me by a student.
Upvotes: 1 <issue_comment>username_9: It depends - both on the level and on the context.
Naturally, it has to be done in a way where "everyone wins".
I don't think it is ever a good idea to teach fast and slow students together if the content of one lesson depends on that of the previous. The faster students get bored and/or the slow ones fall by the wayside. However, it can be OK where students can usefully pick up parts of the lesson independently.
What can be useful is for the fastest students to duck out of a proportion of their group's repetition and to coach slower - and/or younger - students from other groups. The fastest students have to learn to think in different ways, and (done right) the slow/young ones benefit from the guidance towards approaches that work.
Of course, in a high-school context this requires significant teacher support, so it doesn't save much (if any) teacher time. However, it improves cohesion between the student groups, and it produces better-rounded students. I feel that it also helps apparently less able students to achieve more of their potential, but I have not seen evidence for this.
At a higher level, many of the great universities use PhD students to give tutorials to small groups of undergraduates. The PhDs maintain their breadth and (sometimes) challenges\*, the undergrads at minimum get a different perspective. The proof of this pudding is (as they say) in the eating.
\*To say nothing of a useful income supplement
Upvotes: 0 <issue_comment>username_10: I wasn't going to post this as an answer, rather leaving it as an underdeveloped comment. But I was persuaded that perhaps it should be an answer.
Ultimately this question is actually three questions:
1. Do students struggling with a specific concept benefit from help from students who do understand it?
2. What is the cost or benefit to the student doing the helping?
3. Under what circumstances should we, as educators encourage or mandate this?
Lots of the answers focus on the second question, and that is what my original comment focused on:
The old adage is that you don't really understand something until you teach it to someone else. Helping others understand something helps you as it increases your understanding. Note this is not the same as doing others' work for them. In the world of work you will inevitably work in teams, and in those teams some will be stronger than others. Learning how to navigate this: how to make sure the team performs (as you will be judged on team performance), without doing everyone's job for them is an important skill to learn.
It has been commented that it isn't always the case that helping others understand increases your own understanding. I have to say, I'm skeptical (whether it's the only or most efficient way is a different matter). Someone commented that it's better just to work through problems, but I think you can often use a concept to solve a problem with only a surface understanding, whereas to teach someone else who is struggling, you need to genuinely [grok](https://en.wikipedia.org/wiki/Grok) the idea in its fullest.
Take standard error of the mean as an example. One can easily calculate SEM by following a simple formula. One can use SEM to make judgements easily without really understanding what it is by following a set of rules. But to really understand SEM, and the difference between SEM and SD, one needs to understand all sorts of things about the central limit theorem, the connection between distributions of random variables and the distributions of the statistics of samples of those distributions etc. And not just parrot definitions, but be able to come up with the definitions (and novel wordings of the definitions, novel metaphors for them etc) yourself, *ab initio*.
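As an illustrative aside (my sketch, not part of the original answer): a short simulation makes the distinction tangible. The SD describes the spread of individual observations, while the SEM (SD/√n) estimates the spread of the *sample mean* itself, which is what the central limit theorem is about. The parameters below (mean 100, SD 15, n = 50) are arbitrary choices for illustration.

```python
# Quick simulation: SD of observations vs. spread of sample means (SEM).
import math
import random
import statistics

random.seed(0)
n, trials = 50, 2000
draw = lambda: random.gauss(100, 15)  # arbitrary population: mean 100, SD 15

# Spread of individual observations in one sample, and the SEM it implies.
sample = [draw() for _ in range(n)]
sd = statistics.stdev(sample)
sem = sd / math.sqrt(n)

# Empirical spread of the means of many independent samples.
means = [statistics.mean(draw() for _ in range(n)) for _ in range(trials)]

print(f"SD of one sample:     {sd:.2f}")   # close to 15
print(f"SEM from that sample: {sem:.2f}")  # close to 15/sqrt(50) ≈ 2.12
print(f"SD of sample means:   {statistics.stdev(means):.2f}")  # also ≈ 2.12
```

The point is that the last two numbers agree: the SEM estimates how much sample means wander, not how much individuals vary.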
Now, given how well you have to understand something to teach it effectively to someone else, we have to ask if it is likely a "weaker" student will benefit from the teaching attempts of the "stronger" student (we might ask the same of the average professor as well).
The same goes for group work. As I pointed out, all of us need to work in groups from time to time. Even if you are to be an academic mathematician, you will need to work with others on administrative committees etc. Being able to work with others who may be "weaker" or "lazier" this way without just doing the work for them is a tricky but important skill. But just putting people in groups and telling them to do group work is not teaching the skill. It should be an express learning outcome, recorded in the syllabus/curriculum (it is a stated learning outcome of all our degree programs, but not all our modules - students must have demonstrated it one way or another by the end of their degree, but not in every course). Just shoving people into groups may lead to the stronger student just doing all the work, because that is often the lowest-effort way to get a good grade: leading and encouraging and mentoring the others is more work.
Upvotes: 2 <issue_comment>username_11: I have taken many Mathematics and Physics classes. Sometimes there were things I struggled to understand or had a mental block on and other students helped me, probably benefitting in the process. Sometimes other people struggled with things and I helped them. Explaining concepts helped me to understand them as well.
In an English language class, would it be wrong for the students who speak better English to help the students who speak worse English?
Would you regard it as somewhat repugnant and elitist to call the ones with better English "strong" and the ones with worse English "weak" when everyone is there to improve and try to get better?
Upvotes: 1
|
2022/12/09
| 3,275
| 14,500
|
<issue_start>username_0: Can you explain how course schedules are structured in the United States? Is there a pre-set timetable? Do people generally attend classes with the same people? Do most people graduate on time?
For example, in Russia we have cohorts of students who attend the same degree program. Everyone in that cohort has the same or nearly the same timetable for the whole 4-year period. Those people attend the same, mostly preselected, classes together and get to know each other. If you fail a class you either have to redo the exam or, failing that, retake the whole year.
My understanding is that in the US, after being accepted, something like this happens:
Every student chooses their own classes based on the gen-ed requirements and the classes that are required for their major. The classes are structured like a skill tree, with more advanced classes requiring others as prerequisites, but every student can take and retake classes in their own time and is not required to "keep up" with anyone else. Two people graduating with the same major may end up having attended a very different set of classes at different times.
If so, how is a university able to create a unique timetable for every student? Wouldn't constructing a timetable so that every student could take their desired classes be a nightmare? If everyone is taking different classes in different order, would it mean that many people are unable to complete their degree in 4 years?<issue_comment>username_1: Students arrive at college with different levels of understanding about what they want to study. I knew from the first it would be mathematics. My daughter only chose her major study (philosophy) after three years of study, having studied many things already. The latter case is unusual and most choose initially or after about one year.
In many, perhaps most, universities, every student at some point chooses or is assigned a faculty member as an advisor, who meets with the student to plan for the coming terms according to the interests of the student at the time. In large universities the early advisement might be handled by a special office rather than a faculty member. In general, though, students have a way to get advice on planning the path to the degree.
There are limitations in that there is a core undergraduate set of courses that must be taken. Everyone takes a bit of, say, history, science, philosophy, writing, literature, etc. in addition to any chosen major.
Once you have a major, you have an additional set of core requirements. In math it will include analysis, algebra, probably topology, etc. These are normally started in the first years. The advisor has the duty of giving good advice on course sequencing.
A chosen minor subject will also have core requirements and some later options. Where I studied, a "minor" in philosophy was almost guaranteed due to the overall core requirements. A minor requires, say, about half as many courses as the major.
A student in math and a student in literature will, perhaps, share a course or two in the first year, but not likely afterwards.
The final two years are more specialized with fewer required courses and more options in the major and (perhaps) any minor(s) chosen. All this is guided by the advisor who meets with the student for course planning and possibly other academic advice. One normally has the same advisor over all four years, but, perhaps not, especially if the major is chosen later. You want an advisor who is a faculty member in your major.
Note that the advisor is guided by somewhat flexible rules. A certain number of courses is required for this and for that and courses have some sequencing requirements, but exceptions are possible and not necessarily rare. I had an exception made for my foreign language requirement, for example, having studied both French and German, but not quite enough to satisfy the requirement for either. For that, the dean needed to approve and took my overall trajectory into account.
In the upper level courses you are likely to see many of the same students in your classes, but this depends on the size of the institution. My undergrad place was small, and I could depend on having two fellow (yes, fellow - all male college) students in all my upper level courses and a few others in some others. At a large place, this might not happen as often.
Even in US grad school with advanced courses at the start, you would see many of the same students in the courses intended to prepare for comprehensive exams.
Note that US higher education isn't like a production line, with everyone getting the same treatment and everyone progressing at the same rate. If students fail, they might retake the course or find an alternative, even changing majors. Retaking final exams after a failed course probably isn't an option here. I've never heard of it anyway.
The education is much broader and is intended to form an "educated person", not a technocrat, though some specialties have that characteristic (law, medicine...). The system in general has been described as producing a Renaissance Person.
---
Some detail, typical, but not universal as there are a couple of popular systems:
An undergraduate program might consist of 16 "credits" per term for 8 terms or 128 credits overall. This might be four or five courses per term or maybe six, with some having less weight and some more (lab science). Overloads are possible. I once did 21.5 credits in a successful term. A "credit" is roughly equivalent to meeting one hour per week in a course, so a 3 credit course has three meetings per week over about 15 weeks or 45 hours of instruction, including exams.
The major might be in the neighborhood of 48 or so credits or around 40% of the total. Many students take more in the major than the minimum. A minor might be around half of the major.
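In code form (a trivial restatement of the arithmetic above, using this answer's example numbers):

```python
# Back-of-the-envelope degree arithmetic from the example above.
credits_per_term, terms = 16, 8
total = credits_per_term * terms       # 128 credits overall

meetings_per_week, weeks = 3, 15
hours = meetings_per_week * weeks      # ~45 contact hours for a 3-credit course

major = 48
print(total, hours, round(100 * major / total))  # 128 45 38  (roughly 40%)
```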
Some places (Dartmouth for example) use a different system with things divided up differently, but the totals roughly the same.
Upvotes: 2 <issue_comment>username_2: US education is not "one system" - every institution is potentially unique because they all run themselves. There is no national system. There are regional accreditation organizations, but these organizations don't care about the mechanics of scheduling.
>
> If so, how is a university able to create a unique timetable for every student?
>
>
>
At large US universities (i.e., those with incoming classes approaching or greater than 10,000 students), they certainly don't bother with anything like that for individual students. *Courses* come with a set schedule; students choose to enroll in those that fit their needs and schedule for each term.
When a department is choosing when to offer a course, they'll certainly consider likely conflicts with interested students. For example, incoming students in Biology likely need some version of "Biology 101" and "Chemistry 101" in their first semester; if they don't take these right away, they'll be behind the next semester as these are required prerequisites for the next semester of classes, and those for the next semester, and so on. Therefore, the Bio and Chem departments will talk to each other and offer these courses at different times; at a big university, there are likely many sections of each with different instructors. The same applies to various courses within a major; frequently there are classes that, say, every Physics student will take the first semester of their third year. They're not going to offer Physics electives typically taken by third-year physics students at the same time as that standard course, or no one would enroll in the electives.
A Biology student likely also needs to fulfill some sort of history requirement, but they have flexibility on when and how in their studies they do this. If they want to fulfill it specifically with History of International Basketweaving, they might not be able to practically take it the same semester that they take Reptile Dissection. In an extremely unlucky case, maybe they never have room in their schedule for it because one year it conflicts with Bio 101, the next year it conflicts with Organic Chemistry, then they miss it for Reptile Dissection and finally for their Bioinformatics Lab. That's likely okay, because they can fill their history requirement with a different course, but some students may choose or need to take additional semesters to complete all the coursework they want. It's especially likely to find conflicts when students try to complete multiple majors, because not every possible combination of majors can coordinate all their classes to not overlap.
Schedules tend to be fairly consistent from year-to-year: roughly the same courses, roughly the same number of students, so it's not like these things need to be revamped from scratch every year. When a new course is added to the schedule, the department will work with administration and look at the whole timetable of courses much the way a student would. There's also likely to be some pre-existing structure, like courses only start at certain times of day. As an undergraduate, all my classes on M/W/F started at 8, 9:05, 10:10, 11:15, 12:20, etc, designed to have a 50 minute course with time to get anywhere else on campus before the next slate began. Longer courses might occupy multiple time slots, and there were also options for evening classes that started around 5:30, lasted much longer, and met less frequently, all to accommodate part-time students/students with full-time day jobs/students with daytime childcare responsibilities.
There are also typically counselling resources for students, because yes, especially early on, scheduling can be intimidating for a new student. In practice, though, if a student is registering for, say, 3-5 total classes, probably at least 2-3 of them are pretty clear choices based on their major, so that the decision space is not so vast.
Smaller institutions may have much more systematic class times where most students from the same major take exactly the same courses, or major and elective credits are offered in separate semesters. I'm less familiar with these systems, but consider that some US institutions may only have something like 300 incoming students per year; in that sense, they're smaller than many pre-university high schools in the US.
Upvotes: 3 <issue_comment>username_3: One thing that makes the scheduling easier is that there are often multiple "sections" of a possible course.
e.g. there might be Calculus I lectures on Monday/Wednesday/Friday at 9 am (section 1), 10 am (section 2), 11 am (section 3), and Tuesday/Thursday at 11 am (section 4) and 2 pm (section 5). Students can register for any of these five sections of the course and they're considered to be equivalent, although different professors might teach the sections.
At smaller colleges like my institution, many courses are offered in single sections. We take pains to make sure that known schedule conflicts are avoided. e.g. Vector Analysis (Math) and Electricity and Magnetism (Physics) are corequisites, so we check to make sure that there isn't a conflict. If it happens that Abstract Algebra is scheduled at the same time as Electricity and Magnetism, then a student who wants to take both will have to make a choice.
It's absolutely true that schedule conflicts like these can delay graduation for students. In the US, graduation rates are usually computed at 4 years and at 6 years, with the 6-year graduation rate often twice as high as the 4-year rate. There is a growing trend of using software tools to construct conflict-free class schedules to help students graduate on time.
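To illustrate the kind of check those scheduling tools perform (a toy sketch with invented meeting times, not a description of any actual product): two course meetings clash exactly when they share a day and their time intervals overlap.

```python
# Toy schedule-conflict check. Times are minutes from midnight.
def overlaps(a, b):
    (days_a, start_a, end_a), (days_b, start_b, end_b) = a, b
    return bool(days_a & days_b) and start_a < end_b and start_b < end_a

schedule = {
    "Vector Analysis":  ({"Mon", "Wed", "Fri"}, 9 * 60, 9 * 60 + 50),
    "E&M":              ({"Mon", "Wed", "Fri"}, 10 * 60, 10 * 60 + 50),
    "Abstract Algebra": ({"Mon", "Wed", "Fri"}, 10 * 60, 10 * 60 + 50),
}

courses = list(schedule)
for i, c1 in enumerate(courses):
    for c2 in courses[i + 1:]:
        if overlaps(schedule[c1], schedule[c2]):
            print(f"Conflict: {c1} vs {c2}")  # Conflict: E&M vs Abstract Algebra
```

A real tool would search over all sections of every needed course for an assignment with no such conflicts, which is essentially a constraint-satisfaction problem.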
Upvotes: 2 <issue_comment>username_4: This varies greatly by institution and also, potentially, by your intended major. Some institutions admit undergraduates to a specific major directly, while others do not expect you to officially declare a major until later on, sometimes in the second year (often because selecting a major can have financial aid implications).
Within that, colleges and universities have "general education" requirements that may also vary by major. Sometimes general education will be totally free choice, sometimes it will have categories with choice within them, and sometimes there will be specific courses recommended for specific interests or majors. For example, STEM majors will typically encourage you to use calculus or another course required for the major to satisfy your math general education requirement. On the other hand, they may not care which literature class you take. The English department, however, may want their potential majors to take one certain literature course but not care which math course.
In some schools, large and small, there is a common first year curriculum for everyone (that may also include additional courses). [Columbia University is famous for this](https://bulletin.columbia.edu/columbia-college/requirements-degree-bachelor-arts/): "Six shared courses, in which all students study the same content and learn foundational academic habits of mind and habits of work."
Very small institutions may offer a cohort model. One extreme example is the [Webb Institute](https://www.webb.edu/). But, in fact, many colleges have "learning communities" in the first year in which a group of students takes some or all courses together. And once you are in a major, you will probably be moving through classes with many of the same students in the same order.
Many institutions offer "guided pathways" or "meta majors" in the first year. In these systems students are given options that will keep them moving forward but are intended to also move them along in their likely majors.
In sum, there is no one answer to your question because institutions vary so much from one to the next. You should research each institution you might consider (e.g. based on location, available majors, size) for its special characteristics and practices.
Addition in response to comments:
I can't really answer the questions in general because they depend on the institution. From personal experience at my institution, they are more common for course-based graduate programs. Keeping a cohort together for four years, especially where students can fail courses, change majors, study abroad and so on, would be challenging. A place like the Webb Institute is very small and offers just one program, so it is more able to do this.
Upvotes: 1
|
2022/12/09
| 3,172
| 13,541
|
<issue_start>username_0: I gave a test in person this week. For the students that require accommodations, I submitted it online. The staff that provides the accommodations at their facility printed the test and gave it to student "A". When the next student went to take it, she realized the test had all the correct answers marked. By mistake, I had submitted the key instead of the regular test. This student immediately mentioned it to the staff, who then gave her a fresh test; they did the same for the first student. But student A had the exam in her possession for about an hour and never reported the issue.
Was student A in the wrong by not reporting the fact that she had the key in her hands? She first said she didn't realize the answers were bolded; she then said that she thought the bolded answers weren't the correct answers, but then she marked the exact same answers on the scantrons. Is this an academic integrity infraction/cheating? Should this be reported to the academic integrity committee?<issue_comment>username_1: No. There is no evidence of intent, and you can't prove a negative.
The fault is on the person that handed it to her.
Let's reimagine the scenario. A teacher hands a student a test. The teacher then speaks aloud and tells the student the correct answer to the first question.
What is the student to do in that situation? There is no fault on the student's side. She didn't ask. She had no reason to believe the answer was correct. Why would the student have any reason to believe anything was amiss?
Upvotes: 5 <issue_comment>username_2: Student A was in the wrong to not report this. However, there was obviously no premeditation. As the matter was discovered after 1h, it is still plausible that the student would have done the right thing eventually. The root fault was yours, for submitting the solutions rather than the proper exam file. Thus, I would deem it inappropriate to punish Student A for cheating.
That said, I agree with <NAME> that their score must not stand. The exam result for Student A is obviously tainted. Usually, one would need to balance "global" fairness (to the entire student cohort) with individual fairness (resitting the exam is stressful and potentially detrimental to the student) here. But, as Student A's failure to raise the issue earlier was an essential contribution to the problem, I really don't see Student A being unfairly disadvantaged by having to resit the exam.
Upvotes: 5 <issue_comment>username_3: **Grey zone, check your institute's policies and get ready for a wrangle**
Here's an example policy:
>
> In its broadest terms, cheating involves a willful and fraudulent act on a student's part. That is, information is falsified or fabricated or work to be evaluated by an instructor is submitted by a student as original and unaided, when in fact an unauthorized source has been employed.
>
>
> ...
>
>
> Unauthorized assistance (collaboration with others, proscribed written materials) in completing work for academic credit, including but not limited to: take home exams, tests, or quizzes; lab reports; and homework assignments. Unless expressly allowed to do so by the instructor, students must be aware that they cannot use any aids in such situations. If a student is unaware of an instructor’s expectations, s/he must consult with the instructor.
>
>
>
[Source](https://www.siena.edu/offices/academics/academic-policies-and-procedures/academic-integrity-policy/academic-integrity-violations/)
The person pressing the cheating charge would argue that this is clearly a fraudulent act on the student's part, and that the exam was submitted as original and unaided when in fact unauthorized aid has been received.
The person defending the cheating charge would argue that providing the student with the answers turns the aid from unauthorized to authorized.
The person pressing the cheating charge would argue "Unless expressly allowed to do so by the instructor, students must be aware that they cannot use any aids in such situations".
The person defending the cheating charge would argue that being handed the answer key means the instructor is implicitly allowing the aid.
So, get ready for a wrangle - although I suspect the student will know they're in the wrong and will not press the issue until it reaches arbitration.
Upvotes: 1 <issue_comment>username_4: As username_2 laid out, there is a difference between "There is cheating" and "The exam result is illegitimate".
We should assume that there is a necessity of Mens Rea to judge something to constitute cheating. Clearly, the student had no will to cheat when handed the answer key. So the question is: did the student engage in deceptive behavior when handing back the Scantron answer sheet without saying anything?

No reasonable person can seriously argue that being handed the answer key instead of the exam constitutes permission to use the answer key for the exam, since no reasonable person would knowingly give out an exam together with its solutions under these circumstances. If the student received the answer key, copied it to the Scantron sheet, and then did not point out that there was an error, deception happened at that point and the student would be guilty of cheating. In an exam, there is a positive duty to point out any clear, unfair advantage.

Thus, the important question is whether the student saying nothing, when there was a duty to say something, constitutes cheating. This will depend on the reasonable expectations of the student. If, for example, the student wrote their name on the sheet containing the key, and it was obvious that this was the answer key, then the student could have assumed that it was obvious what had happened, in which case there was no need to say anything. From what you have told us, this appears to me quite clearly a case of academic dishonesty.
There are some questions of fact here as well as a question of law. The latter refers to the definition of an academic infraction, which differs between institutions. These questions should not be handled by the instructor, but by the committee set up to deal with academic dishonesty.
There is also the question of sanctions. Clearly, a student who inadvertently was put into this situation had no premeditation. Furthermore, the student can rightly worry about the impact of your mistake on the student's success in the exam. These would at least be strong mitigating factors that would obviate the need for sanctions.
Another aspect would be the notion of entrapment, where a criminal act only happens because of actions by law enforcement. It seems to me that your negligence (however excusable it is) constitutes entrapment, and as a legal principle, entrapment is a complete defense to a criminal charge. We are of course arguing by analogy here, as you are not law enforcement and we are not talking about guilt. Note also that different jurisdictions come to different conclusions; e.g., German law does not deal with entrapment the same way that US law does. Thus, according to US legal doctrine, the student can mount a defense based on entrapment, because the student had no predisposition to cheat and because many students, when put into this situation, would not have said anything.
In summary: The student had a positive duty to tell the proctor that something was wrong with the exam sheet. The student's action therefore constitutes cheating. However, the level of guilt is minimal and one can argue that you accidentally entrapped the student, which would be a complete defense against accusations of wrong-doing. However, you should not have to decide this, this is what the competent committee is supposed to judge.
Upvotes: 3 <issue_comment>username_5: >
> Should this be reported to the academic integrity committee?
>
>
>
Yes. Report the facts, and let them figure out if it is cheating or not according to their standard of proof.
This is not the case here since it was an accident, but in general, keeping dubious cases from the committee isn't good, as it prevents them from catching repeat offenders. It's best to have a single point of decision, also for the sake of uniformity.
Upvotes: 2 <issue_comment>username_6: I have to admit I am surprised at the other answers. In my view this is cheating, plain and simple. The student is passing off someone else's work as their own on an exam. That is cheating. How they got the answer key is irrelevant.
My first paragraph is a little strong. There may be some doubt that the student actually **used** the answer key. But I think a strong case can be made—at least for further investigation—since all of her answers matched the answer key. If the student is a strong student, and has a history of getting high marks, that increases the amount of doubt. If this test score differs substantially from her previous exams (or differs substantially from the performance of the highest scores from other students on this particular exam), that decreases the amount of doubt. Was she the only person to ace this exam?
Her account of the situation doesn't seem credible to me. First, there is her changing story (at first she said she didn't notice the bolded answers, then she changed her story to say she thought they weren't correct). The idea that she thought "the bolded answers weren't the correct answers" defies logic. Why would they be bolded? Has any other exam in your course ever had some of the answers in bold? Isn't it odd that only one answer per question was in bold?
Why would this student say she "didn't think the bold answers were correct" in spite of the fact that she marked those answers on her scantron? Doesn't the fact that she marked those answers indicate that she *did* think they were correct? Isn't she contradicting herself?
Either way, this situation is suspicious *enough* that it should be reported as a possible infraction to the academic integrity committee. Let them decide if further investigation or other action is warranted. You don't need to be both judge and jury. But you do have a responsibility to the school and to the other students to at least report activity that is very suspicious.
**Based on some of the comments, I think I should add:** This is not a zero-sum game where either the student OR the OP is at fault. They can both be at fault, but for different things. The OP is at fault for mistakenly releasing the answer key. But there is a very strong appearance the student attempted to *capitalize* on that mistake to get a grade higher than she would have earned on her own. If the student didn't recognize that the correct answers were in bold, it's unlikely her scantron would match the answer key, even down to the mistake on the answer key itself. If the student did recognize she had an answer key, then she is at fault for using it. Merely getting or having the answer key is not her fault. *Using it* to try to inflate her grade is the problem. If she knew she had the answer key, then she would have known that was a mistake, and she should have brought it to the attention of the proctor (as the other student did). *Getting the answers by mistake does not lessen her obligation to take the test honestly*, so that it reflects her mastery of the subject. (The previous sentence is what seems so self-evident to me, that I started the post stating that I was surprised at most of the other responses.)
Her only possible excuse is that she thought the test was supposed to be that way. Or that she recognized the bold options seemed to be the correct answers, but she tried to ignore those and answer the test on her own. The first reason is not credible. It fails the laugh test. This is not her first rodeo. She knows how tests work.
The second reason is more believable and IMO a valid excuse. Except in this situation, it's not what she said. And it strains credibility since her scantron matched the answer key, even down to the mistake on the answer key.
Nonetheless, she deserves the chance to explain herself. The Academic Integrity Committee (which at most schools has student representation) is the appropriate place to do that.
Upvotes: 0 <issue_comment>username_7: In my previous institution, there was a policy against people who helped others cheat, intentional or not. I don't think what she did was cheating, but *if* it was decided to charge her with cheating, get ready for an *abetting cheating by negligence* charge (or whatever similar name your institution has for helping cheaters) **against you**.
It would be completely unfair to punish her, but not you, for this incident.
Upvotes: 1 <issue_comment>username_8: Cheating itself requires at least two elements: knowledge that it *is* cheating and intent to cheat. In this case it is possible that there is just a misunderstanding exacerbated by the stress of an exam. There was certainly no *prior* intent to cheat.
To make a *charge* of cheating, requires *evidence*, not a "gut feeling" or a suspicion. More than "some likelihood". I think the evidence here is weak, since the situation you describe is (one hopes) very unusual.
I suggest that you let it go and be more careful in the future. Yes, some infractions occur and some can't be caught or punished, but in grey areas, the balance should be towards leniency rather than severity. That is to say that false positives are more harmful to the overall educational process than false negatives. An accusation of cheating can devastate an honest student.
Upvotes: 2
|
2022/12/10
| 2,176
| 9,160
|
<issue_start>username_0: A student often sends me emails asking obvious (homework) questions whose answers can easily be found in the teaching materials.
I do not always keep the teaching materials with me, so I cannot immediately reply to the student with, for example:
*(questions whose answers are in the lecture notes)*
>
> The answer to the question can be found in p.59, Lecture Note 5
>
>
>
or *(questions that can easily be solved using the teaching materials)*
>
> It is similar to what we did in the previous lecture. For this question, the only difference is that...
>
>
>
As a result, I always decided to reply later (it is not an urgent/important task), and sometimes forgot to do so. It often took me a few days to reply. It is actually fine if students ask me these types of questions during tutorial lectures, as that is my duty.
As a teaching assistant, can I just ignore these emails from the student to save time?
*Thanks for all the valuable answers. I realize I made a mistake: it is clear that the answers to the questions are in the teaching materials, but that does not mean students will manage to answer the questions just because the answers are there. It is still possible that students may not understand the proposed solutions.*
*I will also suggest that the instructor create a Q/A section on Moodle for students enrolling in the course in the next academic year. It may be very helpful.*<issue_comment>username_1: If it is a single student doing this, then you might try a different approach. The student has some issues/problems that you aren't recognizing and I can't diagnose. But a sit-down with this student might help them learn how to learn on their own, without asking so many questions that you tag as obvious but that may not be obvious to them.
If it were many students, not one or two, then my advice would be different, but here, the student seems to be drowning and you could throw them a life jacket or (better) help them learn to swim.
I was surprised to learn that some fairly advanced undergraduates had never really learned how to learn, and had difficulties when things got hard. In one case it was a failure to take notes in class and too much dependence on memory. That had worked for them in the past, but didn't in more advanced courses.
Because of your position as a TA you have a certain way of thinking that is effective. Others haven't reached that level yet and it is good to recognize it. Don't assume that your students are just like you. They aren't.
If you just ignore their emails, they are unlikely to make the progress that they could and which it is your responsibility to assist.
I also advise folks like yourself to give minimal hints when asked questions. You seem to already recognize this by pointing them to relevant resources. Just enough to get them past a block - or to recognize the nature of the block - is better than a full answer.
But, a sit down might give you insight into the real problem they are having, which might make it go away.
Upvotes: 7 [selected_answer]<issue_comment>username_2: **Never ignore emails without first saying you are terminating the conversation.** It's rude, and it leads to all sorts of problems. Even something like "come to my tutorial lectures and ask me the questions there" is more useful than no response whatsoever.
Just check out how many questions there are on Academia.SE about "[no response to email](https://academia.stackexchange.com/search?q=no+response+email+is%3Aquestion)" or "[no answer to email](https://academia.stackexchange.com/search?q=no%20answer%20email%20is%3Aquestion)". As of the time of writing, there are 180 results for the former and 229 for the latter, out of 40,207 questions asked. In other words, a full ~1% of academic problems stem from people not answering emails and leaving the other person in the dark.
Don't be one of them.
Upvotes: 6 <issue_comment>username_3: I suggest not ignoring any emails from the students. You as a teacher might find the material easy, but the student might have some problems and may not be able to work things out from the teaching materials.
It would be better to sit with the student and ask them to change their approach to doing homework, so that they understand how to use the teaching material effectively and complete the homework.
Upvotes: 3 <issue_comment>username_4: Do not ignore emails, even ones that may be repetitive or ones that feature "obvious questions". It's unprofessional, may lead to problems on your side and, most importantly, may lead to some students falling behind simply because they were not attended to properly. Obviously, you're teaching adults and you don't need to babysit them, but all of us have gaps in our knowledge and any student may struggle with parts of the curriculum, guidelines or any other things that may be obvious. Ideally, you'd either spot these gaps and try to reformat your approach to curb these questions in the future or, if this is an isolated incident with one particular student having trouble, try and boost them, providing information in a simpler way or simply reiterating things. Yes, it takes time, yes, it's tiring. But you can't assume that a student is being deliberately ignorant or isn't worth your time. Such bias is unbecoming.
Upvotes: 4 <issue_comment>username_5: It would be unprofessional to simply ignore the student's emails unless you're absolutely sure you can't reply without coming across as arrogant, condescending, and unprofessional. Keep in mind that, as a TA, the answer may be obvious to you, but a student's role is one of learning, so it isn't always going to be obvious to them. A teacher's role is to guide the student. Answer the question (no matter how stupid you think it is), but also let them know where they can find the answers in the future and point them in the direction of tools they can use to help them succeed, not only in class but also in life.
Don't simply say "Answers are in the lecture notes" or "The answer to the question can be found in p.59, Lecture Note 5" or "These questions can easily be solved using the teaching materials" or "It is similar to what we did in the previous lecture." It's very blunt, comes across as condescending and implies that you think the student is an idiot and you consider yourself a superior being.
Upvotes: -1 <issue_comment>username_6: ### Don't ignore them.
What's obvious to you might not be obvious to others.
* An answer that's clearly stated in the lecture notes is only necessarily obvious to someone who has memorised (and properly mentally indexed) every word in the lecture notes. Otherwise such things can be quite easy to miss.
* If something's similar to what's been covered in a lecture, with a minor change:
+ they may have forgotten that part and/or didn't think to review that
+ they may not have understood it properly
+ they may not have spotted the similarity
+ they may have missed that lecture
+ they may feel overwhelmed and just don't know what to do (at which point they may be unable to think of even the most obvious thing)
If they just thought asking you is easier, that might be a reason to ignore them *eventually*, but it would be difficult to establish that this is the reason initially, and it may still be polite and useful to reply, even if it's just to establish that asking you questions they can easily figure out themselves doesn't get them easier answers.
### Don't (always) just give them the answer.
Teachers shouldn't just teach the information contained in the course, but also, when necessary, help students learn how to learn and figure things out.
Sometimes just giving an answer is the best option.
Other times, this could or would stifle their ability to learn to figure it out themselves. In such cases, you may still want to point them roughly in the right direction (e.g. tell them it's in the lecture notes, even if you don't say exactly where), while leaving the door open to future questions.
This may involve a back-and-forth where you repeatedly point them in some direction or ask them a leading question, and this leads to them making some progress. There's probably a lot more to be said about that, but I'll just leave it there.
This type of teaching may not work that well over email, as you may come across as just unhelpful, in which case they may just not send you another email. I might suggest pointing them in the right direction and then, for example, inviting them to your office hours if they have further questions.
Upvotes: 2 <issue_comment>username_7: Others have suggested that responding to the emails may be a good idea, if you have the patience or energy. I generally agree. However, here are some other pragmatic approaches to consider:
* Hold regular office hours at a pre-announced time. When someone emails you, you can give them a short reply and politely ask them to come to the office hours.
* Ask the instructor to set up a Campuswire or a Piazza forum. This will let the students post their questions on the forum, and often, there will be other enthusiastic students who will help you answer them.
Upvotes: 2
|
2022/12/10
| 568
| 2,563
|
<issue_start>username_0: People generally agree that grades don't matter in graduate school because they are reflective of your understanding of other people's work. What matters is the work you did and your research.
But could it be said they matter to the same extent as undergraduate grades when applying for a job? For example, if an employer was screening candidates for a data science or software engineering position and one candidate was a graduate student and another an undergraduate student and both had a similar GPA would the graduate student be considered a stronger candidate?
If employers consider the grades in courses relevant to their position for undergraduates, then on that basis, if you do well in graduate courses related to that position, shouldn't you be given preference or at least be seen as an equally strong candidate?
Although it's true that the expectations for graduate students should be higher than for undergraduates, if two candidates are being considered then the graduate student will probably be more knowledgeable if they have a similar GPA and other experience.<issue_comment>username_1: Statements that "grades don't matter" in graduate school apply to *academia* and *academic jobs* as a general statement and refer to people pursuing PhD degrees. The reason is that research output is just vastly more important than almost anything else for research-based jobs (teaching-focused jobs may be a bit different, but are still likely to focus on evidence someone will be a good instructor).
People in industry may or may not have any idea what is and isn't important in academia, and can hire by whatever criteria *they* think are important. If they don't see any value in a PhD degree that's up to them.
Since there is no one person in charge of all industry hiring, those criteria will vary.
Upvotes: 2 <issue_comment>username_2: Academic employers typically do not have much flexibility in positions with respect to the education level attained.
Industry employers do not care that much about the marks. Doing well academically is an argument a person could leverage, but any kind of portfolio beats that, so it is only applicable for entry positions.
Where grades likely matter the most is the intersection of industry and academia, when companies like NVidia would approach universities to find people for R&D positions - and in this case more than just GPA goes into the decision. And we are back to square one - actual skills matter, and the grades are a pretty poor predictor of them.
Upvotes: 2 [selected_answer]
|
2022/12/11
| 1,343
| 5,790
|
<issue_start>username_0: While I understand this is not a legal help site and it may differ from university to university, I am curious if anyone has any thoughts on this.
I am in an undergraduate programme in a life sciences-related field. My friend and I have an idea for a genetically modified organism product that we would like to bring to market.
We plan to make the full product elsewhere, but we need a proof of concept which we could easily create in the lab we both work in, just using leftover chemicals/DNA that would be disposed of if not for us using it. We simply want to create one organism to prove it could work, so we can then get evidence and support to complete the project elsewhere.
However would there be legal ramifications for just making the proof of concept in this lab?<issue_comment>username_1: You would need to negotiate an intellectual property agreement with the university's technology transfer office (and other interested parties). Regardless of legal necessity, if you want investors to put money into your business, you should have that agreement available for the investor to examine.
Upvotes: 3 <issue_comment>username_2: Apart from the legality of who owns the product, and the issue of using the resources and equipment without permission, there are other considerations.
You would be doing biological research. You give very little detail other than that a novel GMO is potentially involved.
Usually such things have at least a minimal review for ethics and safety before they start. Possibly it would be fairly trivial, depending on what actions you were considering. But there are *MANY* biological processes that the university would be maximally upset to find out you had carried out without the review in advance. And in the most extreme cases, they could be induced to come after you with legal consequences.
Upvotes: 4 <issue_comment>username_3: State universities in the US have a mandate to support and foster the local economy by developing research and technologies that can be economically exploited. If you have a good idea, the university might be quite open to it.
However, you need to go through the proper channels in order to use university resources. The first step would be to talk to the supervisor of the laboratory, who would also be more familiar with the procedures that need to be followed. You might also want to write up your idea beforehand in as much detail as you can muster and have someone else sign each page, as it is not unheard of for disputes about priority etc. to ensue later.
Upvotes: 3 <issue_comment>username_4: I would offer a more general guideline: if you are using someone else's resources in any capacity whatsoever, it is only prudent to **negotiate in advance** what happens to the products of your work.
This applies not only to startups, but also to publishing results of your previous work without giving any measurable return on investment (affiliations, co-authorships) to the lab you performed it in.
It is exceedingly rare for the work to be blocked altogether, because then no one wins, everyone loses. The crucial detail here is both sides explicitly agreeing that the deal is fair. It need not be purely monetary, either, but IP has value.
Look at it from a managerial perspective: you take in taxpayer dollars and produce something of value to the taxpayers. Overseeing agencies like to see funding statements on all the things the lab has produced, and they are often fairly specific. As a manager, I get a pat on the back for the lab being efficient, and am generally interested in you doing cool things with the equipment which taxpayers could perceive as a good use of their money. If you do not provide me with grounds to say "hey taxpayers, we did this, is this not great?", then for all I care, you just took the money and ran off. Not great. There is another caveat, too - like username_2 points out, if something unethical or substandard happens on my watch, I am liable for that and very much not happy about it!
All in all, you should **do everything completely above the table**, and it may mean jumping through more hoops than otherwise. But these hoops are in place *for a good reason*.
Indeed, you face a risk of investors clawing back a lot of your hard-earned money, but "either you get a little, or no one gets anything" is not too bad of a negotiating position. There are, indeed, businesses which became successful because of appropriating something valuable at the start, but doing so sneakily is certainly not the way to go.
Upvotes: 4 <issue_comment>username_5: This is not a legal help site, and the only correct answer is to find a competent IP lawyer in your jurisdiction to review your contract with your university. It is very possible that using university resources to create new IP would result in you losing some or all of your IP rights, so this is not something you want to take lightly if you ever hoped to start a business or sell your idea. Even just having doubts about the legal ownership of your IP could be enough to make it effectively worthless.
Upvotes: -1 <issue_comment>username_6: This is not a direct answer to your question, but *apropos* this topic, you might be interested to note that <NAME> has written a lot about how various industries have used the university system as a source of tax-funded research whose products they then privatise for their own firms (for a gentle introduction, see one of his [talks on this topic](https://www.youtube.com/watch?v=a4ZIhW5ZpaE)). This is particularly common in military research but it also occurs in technological fields across many industries. So, the cynic in me says: the legality of this practice depends on your ability to use the political system to do it at scale.
Upvotes: -1
|
2022/12/11
| 501
| 2,058
|
<issue_start>username_0: For my statement of academic purpose I want to highlight that I have a work on arXiv which has been submitted to a journal. How should I refer to such a work? Is the term "submitted publication" appropriate?<issue_comment>username_1: Bibliography styles usually give provisions for that: see e.g. [this APA FAQ](https://libguides.jcu.edu.au/apa/what-if). [MLA has an entire collection of them](https://style.mla.org/tag/unpublished-works/), [here is one for Chicago](https://guides.library.uq.edu.au/referencing/chicago17-notes-bibliography/manuscripts).
Pick your poison.
EDIT: Lumping unpublished works together with published ones in academic submissions may not be okay overall. This will then depend on submission guidelines. I am familiar with ones where it is okay to include something which has passed the revisions and was accepted, but not yet published. Otherwise, it is fine to reference it as any other preprint.
Upvotes: 1 <issue_comment>username_2: Let me suggest a different approach. In the CV, create a section for "Work in Progress". List this paper (yes *paper*) there and mark it with "submitted" or "under review" or similar. List other things you have going. You can do this whether or not there is a public-facing version of the paper such as at arXiv.
Then, you only need to mention it briefly in the SoP, which should actually be focused on the future, not things you've done in the past. Mention the paper there only as it supports your future goals for study and thereafter.
A work in progress section is an advantage, IMO, in any application.
Upvotes: 3 <issue_comment>username_3: You refer to this work as 'submitted manuscript, currently under review'.
Optionally you can add 'at journal X', although opinions differ on whether that's tasteful or not. It might depend on your subfield.
If you want, you can add where the preprint is available. That's generally useful, but its utility depends on how much space you have and whether that information is already accessible elsewhere in your application.
Upvotes: 0
|
2022/12/11
| 1,206
| 5,100
|
<issue_start>username_0: I have been paid to develop a piece of code which is being used in a scientific paper. The code is an important piece of this paper, since the paper is about a specific (novel) computational method.
Would it be fair to be included in the list of authors even if I have been paid for it?<issue_comment>username_1: If you developed bespoke software for a particular project, it would not be unreasonable to include you as an author. If the software you have developed is more general, you clearly can't expect to be an author on each paper resulting from its use. Whether you are getting paid or not is irrelevant.
Whether you really should be included, or whether it is a bit of a grey area, depends on how much intellectual contribution went into the software. If you took a precisely defined algorithm and did a straightforward implementation of it, I can't find much fault in denying you authorship. If developing the software included coming up with some details of the algorithm, I'd say you definitely should be an author.
In the comments, Lodinn mentions the [CRediT](https://www.elsevier.com/authors/policies-and-guidelines/credit-author-statement) guidelines, which explicitly include a "Software" category for author contributions.
Upvotes: 4 <issue_comment>username_2: The question you specifically ask relates to fairness, which calls for a rather subjective response. In my view it would be fair for you to be included among the authors, but I also know that some research teams would be more inclined to list your name in the acknowledgements than to put your name in lights along with theirs! On the issue of fairness, I think that the best you can do is argue your case with the other authors.
There are, however, two other issues that are distinct both from each other, and from the issue of fairness. The first is the matter of so-called [moral rights](https://en.wikipedia.org/wiki/Moral_rights); the second relates to the policies of different journal publishers. In those jurisdictions that recognize moral rights, perhaps the foremost right is the to be attributed as the author of a work. For example, in Australia, moral rights (generally) are granted under Part IX of the [Copyright Act 1968](http://www.austlii.edu.au/cgi-bin/viewdb/au/legis/cth/consol_act/ca1968133/), and the right of attribution is granted under [s. 193](http://www.austlii.edu.au/cgi-bin/viewdoc/au/legis/cth/consol_act/ca1968133/s193.html). However, it is not all as simple as might first appear. In your particular case, for example, your moral right to be attributed as the author of the software would not arise merely as a consequence of the software being discussed in the paper. Instead, your moral right of attribution would come into play only if the software itself were being published or distributed. In that case, under Australian law, you would be entitled to require that you be named as the author of the software, *even though* you had been paid for your work, and *even though* you no longer owned the copyright to the software.
In addition to the matter of moral rights, there is also the question of the policies of different journals. Most publishers require that anyone who makes a substantial contribution to a paper be included in the list of authors but yet again, it is never as simple as first appears. Some journals will view the development of an important piece of software as fulfilling the criteria for a substantial contribution, others will not.
It is worth mentioning that statistical consultants often find themselves in a similar position to yours. Much of the work that statisticians do is viewed by some researchers as "mere technical assistance"; other researchers will recognize it as having been essential to turn data into useful conclusions.
Upvotes: 3 <issue_comment>username_3: Let's say I read this paper, and the code. After reading everything published, what is my impression? Do I think the authors of the paper wrote the code? Do I think Andrea wrote the code? Do I think the authors of the paper hired someone to write the code for them, or found some publicly available code?
In the first case, if a reader believes falsely that the authors wrote the code, then we have a case of plagiarism, which is a serious problem for the authors. Everything else: You invested your work, and you want something out of it. I do that kind of work purely for the money. Pay me, and I'm fine. You may have done this work to improve your reputation. That's something that needs to be agreed upon beforehand, because you likely traded money for the hope of reputation. I wouldn't necessarily recognise you as a co-author, but definitely add something like "Software designed and created by Andrea".
BTW. In many countries there is a "right of authorship", which means if you wrote it, then you and nobody else has the right to say "I wrote it". That right cannot be sold. The copyright allows them to distribute your code and charge money for it if someone is willing to pay. It doesn't change the fact that you wrote it.
Upvotes: 2
|
2022/12/11
| 1,412
| 5,962
|
<issue_start>username_0: I am a math postdoc who defended his Ph.D. two years ago.
I have 9 papers, 3 of them with co-authors; in other words, I am the sole author of 6 of my papers. Two of my single-author papers have been published.
Recently, one of my papers was rejected after 8 months because the referee believed that: 1) the novelty might not be sufficient for the journal, and 2) the writing contains inaccuracies in places.
I don't have any problem with the first issue, as referees can have a different opinion. But my problem is the second issue. I found out that all of my papers contain minor problems, such as proving something under compactness assumptions but not mentioning that, mixed-up quantifiers, or using the same notation for two different objects. Long story short, even though the arguments and proofs are correct overall, there are stupid errors, and these stupid errors get my papers rejected from time to time.
What I usually do, when I finish a paper, is not to look at the paper for several weeks, and then look at it again. But again, there are some stupid errors. From time to time, I also ask some experts to look at my papers, but either they don't go through my papers, or they don't look at them very carefully. In general, I found out I shouldn't expect anyone to look at my paper carefully.
One more thing: I know I should work with experts instead of working alone, and I have tried, but experts have several things on their plates and take several years to finish a paper; I don't have that much time due to my short contract. I have been working with an expert, though: we finished the result a year ago, but she hasn't finished the writing!
My question is: how can I reduce these problems, which I call stupid problems, in my papers?<issue_comment>username_1: You are simply noticing a common experience that is pervasive with writers. It is extremely difficult to proof your own work. Waiting a few weeks between edits is a start, but it isn't likely to be entirely effective.
The solution is to have someone else, not involved in the writing, give you feedback.
The problem is that you have a certain mind-set and a way of thinking. That is natural. When you read what you have previously written you naturally adopt the mindset that you had when you first wrote the work. Thus, when you read over a passage you are all too likely to "see" what you think you wrote, not what you actually wrote. The "correct" version is in your head and you sort of skim over the incorrect version, not seeing the erroneous detail.
It isn't at all that you are unsuited to academic writing. It is that the human mind is very complex and has connections beyond what we experience explicitly at the moment.
Multiple edit reviews of your own will catch a lot of these mistakes, but not all. I've found some incredibly stupid errors in fairly simple math writing. I meant "intersection" but wrote "union". It took several readings to catch it and it was an error that resulted in complete nonsense.
Work with a colleague and offer to pre-review one another's papers and give feedback on issues large and small. They will notice things that you don't.
Not actually writing an assumption (compactness) is a nearly perfect example of this. "Of course, compactness is required."
Reviewers can also do the same, actually, if they know a lot about a subject. The mind can "override" the eyes.
Upvotes: 3 [selected_answer]<issue_comment>username_2: **Getting better at proof-reading**: The "fresh eye" after a few weeks break is just the start for proof-reading properly. It allows you to actually read what is written (rather than "reading" what you remember writing). But to properly proof-read a mathematical proof, you need to disbelieve it. Put the onus on the text to overcome that disbelief. This will not just help you catch mistakes, but also identify parts of the explanation that need improvement.
**Getting other people to proof-read**: Reading a paper and proof-reading it are two very different things (see above). The latter takes much more work. If you give your draft to some expert in the area, you may get comments like "This reminds me of this paper here, maybe you should have a look.", but not really proof-reading stuff. (And if they did, I'd feel equally embarrassed about it.)
What you want is to get someone to explicitly proof-read your stuff. Ideally, this person is someone who can understand your paper with some effort, but is NOT an expert in the area. This would naturally put them in the right mindset. You should offer something in return, eg to proof-read their papers.
**Getting collaborators**: Someone who proof-reads because their own reputation is on the line, too, will probably be more effective than someone doing it as part of a deal. Also, even before the proof is written, mutually explaining proof ideas to one another in a collaboration helps identify the exact assumptions.
You don't need to look for more senior people as potential collaborators. Working with fellow postdocs or PhD students can fill the gap just as well (if not even better).
Upvotes: 1 <issue_comment>username_3: I recognize the problem of others agreeing to review your work, but then not reviewing it thoroughly at all. I'm not sure if you can expect them to review it thoroughly either way...
A quote that may somewhat apply here is: "Ask a programmer to review 10 lines of code, he'll find 10 issues. Ask him to do 500 lines and he'll say it looks good." (-not sure who said this first)
If you ask someone to review a whole paper, they will probably be overwhelmed and find it too much work to review it thoroughly. Maybe you can try asking someone to thoroughly review only a small part of your paper (e.g. only a single proof). Then you can ask someone else to do another part, etc. This is likely less overwhelming and can be done in much less time, making them more likely to do it.
Upvotes: 2
|
2022/12/11
| 1,097
| 4,747
|
<issue_start>username_0: There are different metaheuristic algorithms for evaluating optimization benchmark problems; some give better results than others. If a novel metaheuristic achieves a fitness function value very close to that of a previously published algorithm, call it A, but not better than A's, what could be the outcome of the peer-review process? Would you accept the paper, or is there a possibility of rejection, if you were one of the reviewers? Can you explain with an example, if possible?
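To make the scenario concrete, here is a minimal sketch of the comparison protocol reviewers typically expect when a new method reports fitness values "very close" to an existing algorithm A. The two toy optimizers and the sphere benchmark below are illustrative assumptions, not algorithms from any particular paper; a real study would add more benchmarks and a statistical test (e.g. a Wilcoxon signed-rank test) over many independent runs:

```python
import random

def sphere(x):
    # Classic benchmark function: global minimum 0 at the origin.
    return sum(v * v for v in x)

def random_search(f, dim, iters, rng):
    # Stand-in for the "new method": sample uniformly, keep the best fitness.
    best = float("inf")
    for _ in range(iters):
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        best = min(best, f(x))
    return best

def hill_climber(f, dim, iters, rng):
    # Stand-in for "algorithm A": accept only improving Gaussian moves.
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    best = f(x)
    for _ in range(iters):
        y = [v + rng.gauss(0, 0.1) for v in x]
        fy = f(y)
        if fy < best:
            x, best = y, fy
    return best

rng = random.Random(42)
runs, dim, iters = 20, 5, 2000
a = [hill_climber(sphere, dim, iters, rng) for _ in range(runs)]
b = [random_search(sphere, dim, iters, rng) for _ in range(runs)]
print("algorithm A, mean best fitness:", sum(a) / runs)
print("new method,  mean best fitness:", sum(b) / runs)
```

A single near-identical number on one benchmark says little by itself; the means, spreads, and significance tests over repeated runs are what make a "close to, but not better than, A" claim reviewable.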
|
2022/12/11
| 999
| 4,498
|
<issue_start>username_0: **I have accepted a journal paper for peer review for the first time. However, given the very limited time to review it, I'm no longer confident about whether to continue or to back out of it.**
The subject of the paper has been my field of research for years, and I have many publications in the same area (optimization in wireless sensor networks). That is why I was excited about the experience and decided to accept the review, especially since I had been waiting for the opportunity to review my first paper.
The problem is that I didn't pay much attention to **the due date, which allows only 20 days**. I thought that was enough time to review something you already know. And until now, I didn't know that, in practice, a review can take one to three months.
I am in the middle of the allotted period and I am starting to doubt my decision to accept the review. I want to ask what I should do.
Should I send an email to the editor explaining my position and declining the review? In that case, wouldn’t that affect my credibility and image in the community?
Or should I continue with the review and do as much as I can in the remaining time? In that case, I might produce a review that is not of very high quality (because of the very short time given). Wouldn't that, again, affect my image and give a false idea of my capabilities? Another important point is that I could "harm" the paper's author and not be entirely fair in the decision on their paper.
So, **to summarize my question:** I have only 10 days or less left **to review a journal paper for the first time**, and I don't know if that is enough time, even for a paper at the heart of my research area.<issue_comment>username_1: Yes, contact the editor with your concerns. I'll guess that the "deadline" may be more flexible than you think, unless it is for a special issue. That seems unlikely, and it would have been a failure on the editor's part not to provide more time initially.
If you can, give an estimate of how much time you'd need and mention the possibility you might need to withdraw.
The editor has a lot of experience with such things. It isn't an unusual case.
The "appropriate" length of review depends on many things, especially the complexity of the paper. But editors realize that their reviewers are also busy.
Upvotes: 2 <issue_comment>username_2: I would recommend you connect with experienced people in your field to see how they conduct peer review. If everyone in your field took 1-3 months of full-time work per review, I suspect there would be very little actual research being done, because everyone would be busy conducting peer reviews. If the editor sets the deadline at 20 days, then they also seem to expect you to spend no more than a few days on it.
I am aware that theoretical math is much more difficult to review than more applied research, but the effort that you describe seems far off the scale. In my field, most people that I know spend approx. 2-8 hours on a review, depending on its complexity. Whether 20 days are sufficient totally depends on one's personal schedule. Of course, it is an absolutely legitimate request to extend the deadline if it does not seem possible to finish the review within the suggested time. And it is always better to communicate time problems directly than to wait until the deadline has passed.
If it is your first peer review, you should definitely seek guidance in order to do it well, e.g. from your supervisor. At my university, there is even a semester-long peer review course, where people are taught to perform good peer reviews and conduct several real peer reviews for highly reputable journals throughout the course. I think it is a big mistake of the research community to simply assume that everyone knows how to perform a good peer review just because they have published research themselves.
Upvotes: 0 <issue_comment>username_3: I would expect reviewing a paper for the first time, even if there are lots of technical details to check, to take between two days and one week. The reason that a typical deadline is three months is that it may take a long time before you have those few days available for reviewing (especially if you have other papers waiting for review). So I would interpret a deadline of 20 days as basically saying "if you have time available now or in the near future to set aside for this review, please accept, but if not we would rather ask someone else, to get a quick turnaround".
Upvotes: 2
|
2022/12/11
| 953
| 4,292
|
<issue_start>username_0: You are not allowed to directly post material from a professor's lecture, but any notes you take are considered your intellectual property. By the same token, if you take an exam and remember a question, can you post it online after taking the exam to find out whether you answered it correctly? The numbers and wording can be changed, but the question will be essentially the same.
According to an answer to a related question ([Is it ethical to post an exam question and answer to the internet?](https://academia.stackexchange.com/questions/140572/is-it-ethical-to-post-an-exam-question-and-answer-to-the-internet)), this is acceptable, but I find that questionable: although the wording is changed, the idea is the same. Still, I think it's not necessarily a disservice, because although students who take the same class in the future and come across the posted question might encounter it on their own exam if the professor reuses it, most people who encounter it will learn from it.<issue_comment>username_1: Note that plagiarism is about ideas, not specific wording. If you post it and cite it, then it isn't plagiarism.
The professor (or the university) hold copyright on the material as they do on all other writings. So, you should, in principle, ask for permission. One can violate copyright with rephrasing, not just with quoting. Note that an important aspect of copyright law is whether a "copy" makes the original less valuable or not.
However, there is another question which is whether you do future students a service or a disservice by doing so. If your posting is used to increase grades at the expense of learning then it is, IMO, a disservice. Reuse of questions for exams is a fraught activity, of course. The world would probably be better if it weren't done, but in some fields that is difficult. If you want to be of service to learning, then I'd suggest that you don't do it.
In a field like mathematics, there is a vast difference between reading a solution and producing one. Taking "exams" knowing all the questions and answers in advance won't give a student the feedback they need to know whether they have the needed insights. I can imagine an "All A" student who knows nothing of the subject.
You haven't tagged this with [ethics](/questions/tagged/ethics "show questions tagged 'ethics'"). I hope you consider that an important issue.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Assuming that the exam questions are not intended for release, it is possible that the university would view this as an infraction against the general requirement of "academic honesty" or "academic integrity". Depending on additional context or actions that accompany this activity, they might even view it as being an inchoate attempt to cheat in a future exam (e.g., by making the question available to another student who uses this to gain advanced knowledge of questions for a future exam). It is also possible that the university would consider this to be a case of improper collusion in the conduct of assessable work, which is also generally regarded as an infraction against academic integrity.
Many universities have strict requirements on when a student can take possession of the exam questions after the exam (e.g., special coloured sheets for exams that cannot be retained) and the goal of this is to ensure that the questions on those exams do not become available to students, so that the same questions can safely be asked again in future exams. When students collect and share these questions it has the potential to undermine the integrity of future examinations that use those questions. This is generally considered to be an illegitimate form of cooperation in assessable work, so it may be viewed as collusion or as a general breach of academic integrity.
In the case where the exam questions are released by the university, the situation would be different. Universities publish some of their exams on repositories that are available to students, and if the exam falls in this category then there is no issue of collusion or academic integrity in posting a question yourself. The only issues to be aware of in this latter case are copyright law and appropriate conventions on citation of your sources.
Upvotes: 3
|
2022/12/12
| 724
| 3,174
|
<issue_start>username_0: I have recently applied to postdoctoral positions, the applications for which are still under review (or will start the review process soon, based on their deadlines). In the meantime, I have received calls from faculty members at a few institutions I've applied to, asking if I'd be interested in working with them.
For me, the answer is always "yes!" (that's why I applied to those places!). However, I do *not* wish to commit to them without first knowing the status of all my applications. On the other hand, the faculty members also aren't making official offers yet --- they *verbally* say they would love for me to join their groups as postdocs, etc. It's not clear to me if they are simultaneously scheduling such calls with multiple candidates.
What is the best way to express interest and enthusiasm without committing (or appearing to do so)? I fear that overcommitting and then switching to someone else would massively burn bridges (besides feeling really wrong to me); on the other hand, I fear that not appearing enthusiastic enough on such calls might make these faculty back out of making me an official offer.<issue_comment>username_1: Normally, one asks what the next steps are.
* I am happy to hear that! Yes, of course I am interested! [maybe insert reason why you are interested here] What are the next steps, will there be an interview or something?
Asking for concrete steps conveys interest and at the same time gives you information about the stage and timeline of your application (i.e., by when you need to decide). Additionally, every hiring committee should understand that "I am interested" does not mean "I am going to sign whatever you send me, no questions asked".
Upvotes: 6 [selected_answer]<issue_comment>username_2: I wouldn't call these "verbal offers". I would call them "preliminary interviews". Treat them the same way you would treat any other interview for which you haven't yet received a formal offer. Yes, you applied, but until they commit to you, you can't possibly be considered to have committed to them. Nobody reasonable will hold a grudge against you if you apply somewhere but then accept another job from an employer who gave you a formal offer.
Upvotes: 4 <issue_comment>username_3: In my field, there is a [widely accepted agreement](https://www.ams.org/profession/employment-services/deadline-coordination/deadline-coordination) that institutions will not require people to accept their offers before a particular date (Feb 6 2023 for this round). The purpose of this is exactly to protect applicants from having to make choices before they have all information at their disposal.
So (a) see if your field has something like that, and (b) it's absolutely understandable if you tell prospective employers that you won't be able to make a final decision before (the date you decide on) because you are waiting for information from other potential employers. I would consider it unreasonable for any employer to find this objectionable.
(This advice is in addition to the good advice in the other answers that communication that isn't explicitly an offer shouldn't be treated as an offer.)
Upvotes: 0
|
2022/12/12
| 2,198
| 8,796
|
<issue_start>username_0: I have already finished my Mathematics Ph.D. application forms for several universities. However, I noticed that the email address in my CV and personal information contains the number "666", which, I recently learned, is associated with the devil in Western culture. I'm not from a Western country; I'm from Asia, where "666" is considered auspicious. I've used this email address to apply for Ph.D. programs at several universities in Germany and the U.S. I'm wondering whether this could get me rejected by the universities' committees.<issue_comment>username_1: **Don't worry.**
I would find it extremely surprising if a European reacted very negatively to 666. Most people would be somewhat aware of a connection to "The Beast" in the biblical Book of Revelation, but they wouldn't really care. (I grew up in Germany and have spent most of my career in the UK, for reference.) On the other hand, it does not seem to be that rare for people in the US to feel uncomfortable with, e.g., a bill of $6.66, and to buy a little bit extra to avoid it. But even in the US, I expect that the overlap between people who do that and people sitting on admissions committees for math grad schools is small.
Your email address isn't really what people evaluate, and most will just ignore it. Having numbers in your email address may be read as a little unprofessional. If someone notices the 666 they might think you're trying to be a bit edgy by flirting with satanism, but almost no one would care. People applying for PhDs tend to be young, and thus have a decent excuse for appearing a bit unprofessional and/or trying to be edgy.
Upvotes: 6 <issue_comment>username_2: While I agree with [the answer of username_1](https://academia.stackexchange.com/a/191509/75368) that this shouldn't be a problem, there is a technological fix.
It is probably easy for you to create a new email address, say at Gmail, and forward everything from either of them to the other, as you choose.
FWIW, there are other numbers with negative interpretations, 88 for example. And some email systems generate addresses with numeric elements as well.
Applying from an academic institution's email address is sometimes advantageous (minimally) and you might be able to get one from your current institution for purposes of application. You will probably get a new one from any institution you join in the future.
If you are a member of a professional society in your field then they might also make it possible to use an email address with the association name: <EMAIL>, say (UnderwaterBasketWeavers).
Upvotes: 5 <issue_comment>username_3: I suggest that, for applications going forward, you change it. Anything that distracts from your merits as a student should not be present on your application. While the odds are low that anyone will even see it, getting a new email address is fast and easy.
OTOH, if you already sent them in, don't worry about it.
Upvotes: 4 <issue_comment>username_4: While you may not be flat-out rejected for using '666', it is possible it may be interpreted differently from what you fear. Namely, without knowing you or the reason behind your choice, some people might think of you as "cheeky", as some kid trying to be "a bad kid", similar to the style of some heavy metal bands. Such a bias might have some negative effect; for example, while people will not avoid talking to you when you talk to them, someone might avoid initiating contact, or might avoid joint work, etc.
Upvotes: 5 <issue_comment>username_5: A side aspect not mentioned so far but very important is the remainder of the email address.
There is a huge difference in perception between an email address like `<EMAIL>` and one like `<EMAIL>`. The first would most likely be considered unprofessional, whereas the second would definitely not.
Based on my experience, any email address not containing your name or some abbreviation of your name, or not being from a professional (e.g., university/personal) domain, could easily be perceived as unprofessional, i.e., seen as your spam/childhood/teenage email address. However, perception might vary vastly depending on the professional area and its culture. For instance, computer scientists would most likely not care as long as the email itself is sufficiently professional, whereas law professionals might have an entirely different attitude. Given that you say math, I would expect a reaction closer to computer science than to law.
Upvotes: 5 <issue_comment>username_6: Yeah no... This isn't going to be seen as controversial by any respectable institution.
In Europe, it's entirely a non-issue since society is basically secular.
Even in the United States, you should be fine, as academics tend not to be religious fanatics. If you do have any issues because of it, you probably wouldn't have wanted to work in those institutions anyway.
It's also worth adding that discriminating against someone on the basis of them potentially being a Satanist would be massively illegal in basically every western nation.
Upvotes: -1 <issue_comment>username_7: It's like a first date with someone you really like. Do you wear the old ratty shoes she would be fine with once you were already dating? Or do you wear your nicer shoes on the first date to make a good first impression? Most people would wear their nicer shoes.
It's not a big deal, but say even 1 in 200 applications gets rejected for it: do you want to lose that chance? I wouldn't. I would use a slightly more professional email address.
BTW this wouldn't be offensive to anyone in academia, but it's an "edgy" thing that a teenager might put in their gamer tag. If it affects your application it will be because it will give an appearance of silliness and immaturity when you are applying for something very serious.
Upvotes: 3 <issue_comment>username_8: You are not likely to have a problem. Western countries are, on average, less religious in terms of observance despite their Christian heritage, and they are generally less superstitious as well.
Upvotes: -1 <issue_comment>username_9: Judging from the general mood of the comments and answers, it seems to me that the majority of people on Academia.SE have never had to deal with pious, white-collar America.
In short: as long as you are dealing with academics, whether you have 666 or Satan or Iblis or Shaytan [or Mara](https://en.wikipedia.org/wiki/Mara_(demon)) in your email, you will be fine. As soon as your email is screened by someone non-academic (for example, the secretarial office of some full professor or some head of department) in some "bureaucratic" function, your email has most likely already been involuntarily screened out by the spam filter set up for the institution (this may apply to academics too) or voluntarily sent to the spam folder.
Email was born as an informal means of communication for nerds, but it has become a very formal way to reach out to people. Please remember that whatever you write in an email is, in general, forever reachable and readable by everyone ([think of an email as a postcard](https://home.cern/news/news/cern/data-privacy-emails-postcards)).
The western world expects the email to be in the format
>
> <EMAIL>/.org/.edu
>
> <EMAIL>/.org/.edu
>
Anything else is likely to be filtered out by AI-powered algorithms or pious-powered secretaries.
Good luck!
Upvotes: -1 <issue_comment>username_10: If *they* think *you* thought it was amusing to choose an email address with 666 in it, then yes, it could work against you. They won't have the fact that it's considered auspicious in some Asian cultures uppermost in their minds.
If, on the other hand, it's clear that it's an institutional address and you just happen to have been allotted the number 666, without having had any say in the matter, that's different. I don't think that would affect how they consider your application one way or the other. (In Asia, though, it would probably be to your benefit.)
Upvotes: 2 <issue_comment>username_11: All of these comments, in my opinion, are dancing around the issue. If you feel for one second that this may have an impact on your potential admission to these universities, go with your gut and just make a new email address solely for these applications. It takes two seconds to do, and all of your doubt/fear is eliminated instantly. As for the applications you have already sent, this could have an impact or it could not. While we would all hope to live in a perfect world where these things do not matter, we don't, and especially in the United States, people will kill/lie/cheat/etc. for their religion. It is just safer to make a new one, but don't worry about the ones you have already sent out.
Upvotes: 2
|
2022/12/12
| 1,866
| 8,180
|
<issue_start>username_0: I've completed 2 years of my PhD and now have a few ideas which I would really like to work on and publish, but though they are novel, their scope is quite limited (i.e. they aren't ground-breaking). I don't feel confident sending them to Q1 journals and it'll likely just be a waste of time. I want to nail down a couple of journal papers quickly because I'm starting to panic over my lack of publications (only one conference paper so far).
I do think they would easily get into Q2 journals and I want to at least publish them there so that the ideas don't get scooped and so I can point towards them during my examination in the event that a future submission to a Q1 is still under review by then. However, my advisor essentially prevents me from doing this and wants me to dedicate my time wholly and completely towards work that would contribute towards Q1 publications (which I don't yet even have an idea for!). Given the low acceptance rates, getting a paper into a Q1 journal even with a good idea can sometimes come down to luck, because it would be up against other researchers who have worked for decades in the field. But his reasoning is that if the research isn't published in a Q1 journal, then it is not PhD-worthy research and will actually harm me in my PhD assessment. Is this true? What should I do?
Since your concerns seem to be local ones, getting through the exam and writing an acceptable dissertation, your advisor is probably the best source of information about how to do that efficiently and effectively.
My advice, therefore, is to follow that advice and work toward the main goal. I think the advisor has a duty, however, to prime the pump a bit with ideas that will lead to success.
It is possible that those "ideas" might be more significant than you currently judge. A discussion with your advisor might clarify that or lead to other things that would be better to pursue.
The "ideas" you have don't need to be abandoned, however. Do a minimal write up and file them away. When at some point you have the time, either after finishing the degree or when you are stuck on main-line problems, bring them out for review and possible advancement. Yes, you might get scooped on some of them, but you've already judged them as less significant. But a file drawer of ideas to be explored in future is a good thing to have.
Keep the main goal foremost in mind.
Upvotes: 2 <issue_comment>username_2: In my field, some Q1 journals are very prominent, and some big names are in Q2. I struggle to imagine the scenario where publishing in Q2 would actively harm a PhD assessment, and suspect your supervisor's motives are a little more nefarious than that: *their* evaluation may be disproportionately skewed towards Q1 publications. I know it is certainly the case here, where a Q1 publication is "worth" basically twice the points, so an incentive to push it a just a little and squeeze a "Q2-worthy" publication into Q1 is significant.
I find this numbers game outright perverse, but alas, it seems to be the norm rather than exception in modern academia. I think in your situation it is perfectly sensible to do as good of a work as you can, possibly send it to high-ranking journals - why not? - and if your supervisor wants a Q1 publication out of it, they should contribute accordingly. You have planned and performed the research, but then an incessant stream of nagging you for improvements starts? Sorry, but this was mismanaged, please contribute earlier. Sure, take in some of the suggestions, the research should be flexible, after all, but planning should still be functional to an extent.
Upvotes: 1 <issue_comment>username_3: Firstly, you are correct to observe that there is an idea in academia that publishing a paper in a low-impact journal could be worse than not publishing a paper at all. This idea does not dominate academia, but it is not marginal either --- some academics hold to this idea and it certainly influences the profession in some areas/institutions. There are some legitimate reasons to focus on publication in high-impact journals and there are also some reasons that stem from perverse ways that academia presently functions --- here are some things to consider:
* **High-impact is good if you can get it:** Most academics agree that you should at least try to target high impact journals as a first preference, with whatever research you are doing. Ceteris paribus, publishing a paper in a higher-impact journal rather than a lower-impact journal will tend to give it more visibility in the profession and it may lead to more impact for your research. Sometimes academics submit a paper initially to a high-impact journal (often with a low acceptance rate) and then work their way down. Not all academics do this, but some do.
* **Your supervisor probably has a *rough* idea of what is going to pass:** Your PhD dissertation will usually be assessed by referees *outside your university*, so typically an internal advisor can make an educated guess at what those referees will find to be sufficient for the award, but they don't know for sure. Your advisor may have enough experience in the field to have a good understanding of typical referee standards, but there is a lot of variation and unpredictability here. If research is sufficient to be published in a high-impact journal then it is probably of reasonable quality and in an area that is of substantial interest to the profession at that time. You should be aware that publication in the high-impact journals sometimes depends on pursuing the "hot topic" of the moment, but it can also be correlated with quality. For matters like this, follow the advice of your advisor, but note that there is a fair bit of unpredictability in how referees will assess a paper/dissertation.
* **There is an opportunity cost for low-impact research:** A reasonable argument against pursuing research projects that will result in papers in low-impact journals is that this has a time-cost and therefore a resulting opportunity cost --- i.e., time used for that research could instead be directed towards pursuing a research project that is likely to result in a paper in a high-impact journal. This appears to be part of the concern that your supervisor is expressing to you. While this is a reasonable observation, I would caution against pursuing research projects primarily based on their likely impact (as opposed to interest in the topic). The physicist <NAME> famously did some of his best work when he gave up on trying to write high-impact papers and focused his efforts on pursuing problems that were interesting to him without concern for their impact (though in his case these pet projects turned out to have lots of impact). Jump through this hoop during your PhD candidature if you need to, but in the long-term you should aim to establish a research field *that is of substantial interest to you* and let the research product fall where it may.
* **Certain (irrational) assessment methods in academia militate against low-impact publications:** There is still an unfortunate tendency in academia to use certain irrational metrics that punish academics for publishing low impact papers (as compared to publishing nothing). This occurs whenever universities use metrics that look at *average* citations per paper, *average* impact per paper, etc. These are ridiculous metrics to use and they fail even the most basic rationality desiderata for productivity. In particular, they treat a low-impact publication as being worse than sitting on your arse producing nothing, which creates serious perverse incentives. Nevertheless, you will find that some universities still assess applicants and academics using these kinds of ridiculous metrics. Consequently, the idea that doing nothing is better than publishing a low-impact paper, while seriously perverse, is a professional reality in some areas. (Hopefully in time the universities will learn not to do this.)
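To see just how perverse these average-based metrics are, here is a toy illustration; the citation counts are invented purely for the example:

```python
# An author with five papers, each cited 10 times:
citations = [10, 10, 10, 10, 10]
print(sum(citations) / len(citations))  # 10.0 -- "average impact"

# They publish one more, perfectly respectable, paper that earns 2 citations:
citations.append(2)
print(sum(citations) / len(citations))  # ~8.67 -- the metric now rates them
                                        # *worse* than if they had published
                                        # nothing at all
```

Any metric that an academic can improve by withholding finished work fails a basic sanity check.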
Upvotes: 3 [selected_answer]
|
2022/12/12
| 4,366
| 18,512
|
<issue_start>username_0: *There is a negative attitude towards volunteering in academia.*
I am about to graduate (currently an undergraduate student). There is a 6-month gap between my graduation date and the matriculation date at another university. I have planned many things to do during the period (to be productive). One of them is to become an unofficial teaching assistant.
I will no longer be a student at the current university. For this reason, it will not be possible for me to get a job as an official teaching assistant. However, I have no intention of taking TA jobs away from other potential candidates. Instead, I will only target courses that (1) do not need TAs and (2) need some improvements in the teaching materials (e.g., rewriting the code in another programming language that is consistent with the degree program, or providing new toy examples). I will not run any teaching sessions. I also have a good relationship with the instructors of these courses. One of the courses had an unofficial TA this year.
I want to do it just for my own academic interest. Is that an ethical decision?
*PS: TA is probably not the right term, but for the question you can take it as TA*<issue_comment>username_1: Is it ethical to volunteer at your local Subway to teach them how to make better sandwiches?
**No**, you shouldn't do this. Teaching and research can be noble pursuits, but we don't live in a money-free society. The university offers courses in exchange for money - either tuition or tax dollars, doesn't matter - and if the university needs labor to improve them, it should pay for it.
Volunteering should flow *down* from you, as username_4 suggests, to something like [SPCA](https://en.wikipedia.org/wiki/Society_for_the_Prevention_of_Cruelty_to_Animals), which helps animals in need, or perhaps to disadvantaged students. It *should not* flow *up* from workers to things like universities, which are institutions (perhaps of the state), where people like professors enjoy high status. (Or at the very least, sideways, to members of your community, like at a local library, like username_2 [suggests](https://academia.stackexchange.com/a/191520).)
This only encourages universities to keep sucking up labor either for free or below market rate.
Perhaps there is a "teaching specialist" role you could be hired in, especially since it sounds like you aren't looking for a full-time job doing this.
Upvotes: 3 <issue_comment>username_2: While I disagree with [the answer of username_1](https://academia.stackexchange.com/a/191519/75368) I won't vote against it as I realize opinions can differ. However:
Under certain circumstances, yes, you can volunteer your services. One requirement I'd have is that the college/university be a non-profit. The second being that you gain *something of value* for whatever you do, though it need not be money.
I don't think that every interaction in life needs to be transactional, and especially not a financial transaction. That seems to me to be mentally and morally impoverishing. I'm happy to volunteer to speak to students without pay (retired now). My spouse volunteers weekly at our community library. Like a college the library isn't there to make money for anyone, but to provide a valuable service while balancing costs against revenues.
If you learn something from the experience it would be of value. If you support a long term relationship with a professor or two, it would be of value. If you have fun while waiting for other opportunities, it might be of value.
But a relationship as a TA might be out of the question. There are many reasons, one of which is that if you have contact with students then the university is potentially at risk of malpractice suits if you mess up, and so it needs a formal relationship with those who do that work. But aiding a professor with course materials might be possible, and you might learn something. You might be able to observe their teaching, giving you insight into whether you want to do that as a career.
I doubt that a university would use you as an excuse to deny someone else a paid position. Your contribution is small enough that it has little impact on budget or employment. Professionals in many fields come to speak to students without pay. That enhances the education and doesn't seem to deny employment to others.
One can be generous in life. That isn't an issue. But one should seek opportunities in which they get some value even if it is only satisfaction. No one pays me here for any contributions I try to make and might occasionally succeed at.
Upvotes: 2 <issue_comment>username_3: I believe volunteering is ethical for both the person doing the volunteering and person/organization "receiving" that volunteer work when:
1. No actual or hypothetical paid position is affected
2. The volunteer gets a benefit equal to or greater than what they put in
I believe these principles apply even if the volunteer work is for a private company or powerful university. Volunteer work for other causes can also be ethical even when these principles are violated, though.
You write:
>
> I have no intention of taking TA jobs away from other potential candidates. Instead, I will only target some courses that 1. do not need TAs
>
>
>
which appears to satisfy #1, though I think some would be reasonably concerned about devaluing TA labor more generally - this is not something easily measured, and I think you want to consider the "hypothetical" job and not just the actual one.
I think you'll have to make a determination on #2. It seems, though, that you do see some benefit, so you'll have to weigh that against the costs.
My recommendation would be to approach this not as looking for a volunteer teaching position, but asking for some mentorship in developing your own teaching resources. You'll need to figure out what exactly you are *asking* for in this relationship; what it seems you will be providing is access to the materials you produce. It'll be up to the individual instructors to decide whether their time is worth what you provide. I'd suggest that if they spend as much time mentoring you as they would spend if they created similar materials on their own (not necessarily equal to the amount of time it takes *you*), that seems ethical to me. If they're going to spend almost no time mentoring and get some teaching materials for free, well, I'd suggest you're undervaluing your own time and should consider how to better balance the relationship.
Upvotes: 5 [selected_answer]<issue_comment>username_4: **TLDR: It is ethical by default**, unless someone gives a compelling reason why it is unethical (and I have not seen such a reason being put forward so far).
**Detailed analysis:**
Your proposed volunteering has the obvious potential to do some good for the university where you'll be volunteering and for its students. It will also benefit you by giving you useful experience and the satisfaction of doing good.
The argument that the volunteering is unethical, as explained in @AzorAhai’s answer, relies on a belief that your proposed action can also do *harm* by devaluing the labor of paid workers and creating a temptation for universities to rely on volunteer labor. In a context in which many university workers already feel exploited, this argument has significant emotional resonance with a lot of people, which is why, as your question states at its outset, “there is a negative attitude towards volunteering in academia.”
Now, even if this argument has some merit, it's important to keep in mind that that does not automatically mean the action is unethical. Many ethical actions have a mixture of good and bad consequences. E.g., when we get on a plane, we contribute to climate change — does that mean flying is unethical? Surely that's not always the case. One needs to weigh the benefits against the harm, and factor in other contextual factors.
In your particular case, the benefits are very tangible whereas the harm is intangible and very difficult to quantify. The strong reaction from @AzorAhai is in my opinion more indicative of the general sentiment of many people in academia that they are exploited than of a cold, rational analysis. It may well be true that there are vulnerable groups of workers in academia that are exploited or mistreated, but the relevance of that fact to your situation is unclear and potentially nonexistent.
I am therefore inclined to agree with @BryanKrause's analysis. In the absence of any specific information that your volunteer work will displace some paid worker, your action should be viewed as ethical.
Upvotes: 3 <issue_comment>username_5: I'm going to give the (slightly-simplified) perspective of a neoclassical economist on this one:
* **Every labour-supply curve includes natural variation in individual supply:** In any labour market, there is going to be variation in the amount of money that different people want to do a job, and the overall "supply curve" for labour in an industry/job is determined by aggregating this over all people who are willing to work in that industry/job. A person who is willing to "volunteer" (i.e., work for zero wages) will shift the supply curve and reduce the price of labour, but this is not fundamentally different from any person coming into the market who is willing to work the same job for less money than any other person.
* **Every person who is willing to work for less depresses wages, but that is perfectly normal:** *Ceteris paribus*, if a person comes into an industry willing (and able) to do the same job for less money, this shifts the supply curve and reduces wages (a minimal numerical sketch of this effect appears after this list). As a common example, this occurs every time there is substantial immigration of poorer people into a wealthy country; the incoming migrants are usually willing to work for less than the domestic population, and this depresses wages in industries they move into. This phenomenon regularly leads to objections that incoming migrants are "taking our jobs!" and controversy over this has been a regular feature of industrial action for generations. (For a rather uncharitable ---but funny--- take on this, the boys at South Park have [helpfully summarised this objection](https://www.youtube.com/watch?v=APo2p4-WXsc)).
* **Adaptation of the labour market to new entrants is economically efficient:** From an economic perspective, and with some stipulations and caveats attached (that I will not go into here), this is a natural part of the supply-demand mechanism and it is economically efficient for workers willing to do the same job (at the same quality) for less to drive out those who are unwilling to work for less. One can certainly sympathise with those who find that wages in their industry drop, but the overall social effect of adding willing workers is usually positive, particularly in the long-term.
* **Based on this, there is no ethical problem with volunteering:** If you take the above view of the labour market, essentially based on neoclassical economic theory, then you will conclude that the addition of a willing worker (at zero or low wages) is a net positive for society, even if it may cause some temporary dislocation of other workers. This removes the only major ethical objection to volunteering that I am aware of, so I would conclude that there is no ethical problem with volunteering.
* **A slight caveat --- young people are sometimes naïve about the workforce:** As a slight caveat to the above, I would say that young people who have not had long periods of experience in the workforce may tend to overestimate the value of volunteer work to their career and this overestimation can allow institutions to "take advantage" of the free labour provided by young volunteers. Older people tend to be pretty averse to working for free for institutions that profit from their labours, unless they hold the social value of those institutions in very high regard. It is possible that a systematic bias towards overestimation of the value of volunteering (by younger people) could lead to some overall inefficiency in the labour market, as the realised value of volunteering falls short of expectations. That probably doesn't rise to the level of an ethical problem on the part of the worker, though it may give rise to an ethical issue on the part of the institutions that seek them out for employment.
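As the minimal numerical sketch promised above: model each worker by a reservation wage (the minimum pay they will accept) and the employer by a simple downward-sloping demand rule, then find the lowest wage that clears the market. All numbers are invented purely for illustration:

```python
# Toy labour market (all numbers invented purely for illustration):
# the clearing wage is the lowest wage at which enough willing workers
# exist to fill the positions demanded at that wage.

def clearing_wage(reservation_wages, demand):
    for w in sorted(set(reservation_wages)):
        supply = sum(1 for r in reservation_wages if r <= w)  # workers willing at wage w
        if supply >= demand(w):
            return w
    return None  # demand is never satisfied at any offered wage

workers = [20, 22, 25, 30, 35]          # reservation wages, $/hour
demand = lambda w: max(0, 5 - w // 10)  # positions demanded at wage w

print(clearing_wage(workers, demand))        # 25
print(clearing_wage(workers + [0], demand))  # 22 -- after one volunteer joins
```

In this toy market, adding a single volunteer (reservation wage 0) lowers the clearing wage from 25 to 22. That is the precise sense in which volunteers "depress wages", and also why a neoclassical economist regards it as the ordinary functioning of supply and demand rather than a market failure.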
Upvotes: 0 <issue_comment>username_6: There is a surprising lack of ethics in these answers to a question about ethics! "Is action X ethical?" is always a quagmire, and it is nearly impossible to separate it from your personal values. We can use different ethical frameworks to examine your actions, but ultimately the impact of those frameworks depends on what you already see as good or bad.
Ethics
------
First off, from an official university perspective, I cannot fathom any way that you would be violating a student/employee/researcher code of ethics by volunteering your time to develop course materials. If that is what you're asking, then consider yourself vindicated. Go volunteer with a clean conscience! If you're interested in actual ethics then read on.
Aristotle's virtue ethics usually considers virtue to be the mean attribute that lies between two extremes (vices). By practicing virtuous actions and emulating people we know to be virtuous, we cultivate our own virtues. Volunteering is usually considered a generous action, and generosity is often recognized as a virtue that lies between the extremes of selfishness (a nothing-is-free mentality) and being prodigal (a sort of naive mentality of giving everything away to anyone who asks). I think this is what the very good answer by [username_1](https://academia.stackexchange.com/a/191519/68004) is getting at: is giving your time and energy to the university a worthwhile endeavor? Are you giving foolishly, or wisely? This is really a question that only you can answer for yourself. [username_1](https://academia.stackexchange.com/a/191519/68004) provides some arguments for people who see this as a foolish (and therefore not virtuous) use of your time and energy.
On the other hand, we could examine the problem from Kant's categorical imperative: if you could make a rule that describes your behaviour, would applying that rule to everyone else result in a world you would consider better or worse. (This is a simplification, please do not crucify me in the comments for misrepresenting Kant). How would we formulate a rule for this behaviour? Perhaps "Students who want to improve their teaching portfolios and build an academic interest in the material of a course must volunteer their TA services for free." Almost all grad students want to improve their teaching portfolios (if they continue in academia), and many are interested in the course material that they teach as TAs. So then most TAs should be working for free. Does that result in a better world or a worse world? Or "Students who can afford to not be paid must volunteer their TA services for free." Why would I hire a poor student to do this work when a rich student will do it for free? Better world or worse world? Of course this depends on many other factors: are graduate students relying on TA money for their living expenses at your institution? I cannot answer these questions for you. The crux of Kant's ethical viewpoint is that maybe one person acting this way has a minor impact on the university, the other TAs, and/or society as a whole, but if everyone acts this way then there could be serious consequences. Kant invites us to consider these consequences in a structured way.
The final ethical framework in the trio of frameworks taught to undergraduate students is Utilitarianism. This framework looks at whether the action you take maximizes the good results (or sometimes minimizes the bad, depending on the flavour of Utilitarianism). You volunteering your time seems like it would make you happy and satisfied (good!), it would probably make the instructor that you work for happy and satisfied (good!), it would probably make the university administrators nominally satisfied that they don't have to pay anyone (good!). Maybe your volunteering has a minor contribution to depressing TA wages ([see username_5's answer](https://academia.stackexchange.com/a/191556/68004)) (bad?). Does the good outweigh the bad? How do we measure that? Again, only you can answer this question for yourself based on your personal morals regarding what is good or bad.
TA Unions and Conflicts of Interest
-----------------------------------
Something else that I am not seeing much in other answers is the teaching assistant (TA) labour legal situation at your institution. At my institution, TA positions, lecturer positions, and non-faculty course instructor positions are unionized. What is or isn't considered unionized work is very clearly defined. Developing and improving course materials is considered either TA work or course instructor work at my institution, though this will vary from place to place. In some departments at my institution, TA work is a mandatory part of the graduate student funding package (graduate students are required to do TA work if they want to receive the full student salary from their department). By volunteering to perform this work for free you personally are not necessarily doing anything wrong, but the university would be opened to grievances (legal action) against them by the union by accepting your labour for free.
This is the *de jure* reality at my institution. However, the *de facto* nature of TA work often sees TAs working longer hours than specified in their contracts, or performing extra unpaid duties. Many TAs are graduate students working for professors in their department. Maintaining good relations with these professors is often crucial for the future academic success of these students, especially if they are working as a TA for their research supervisor. Many professors at my institution expect TAs to put in a little off-the-books work when they demand it. A TA who is by-the-book, with a that's-not-in-my-contract mindset will likely sour these relationships. Note that unlike an independent contractor-client relationship, TAs don't get to pick what professors they work for at my institution. They do get to indicate preference, but ultimately they will be assigned wherever TAs are most needed. As a result, whether intentional or unintentional on the part of the professors, TAs can feel coerced into "volunteering."
Upvotes: 2
|
2022/12/12
| 1,638
| 7,110
|
<issue_start>username_0: I am first author on a perspective paper being reviewed. The last authors are well known and comfortable discussing the future of certain technologies that can be seen as exciting but also provocative/delicate/controversial.
Our paper has about 20 co-authors. We wrote the paper very conservatively, only discussing the less delicate aspects of the field for the short-term. At one point, we did add a direct mention of the long-term more provocative possibilities (using explicit terms) - but then a small number of co-authors expressed concern, stating that if we explicitly stated these more long-term provocative directions, we could be "put on a list" by certain funding groups. This is because certain data we present, when coupled with explicit terms, can make the paper all the more provocative.
I always felt uneasy with stating the explicit long-term direction our work could go. So, I was relieved when the co-authors stated this - because we then removed the most controversial sections. I now *strongly* prefer to avoid explicitly stating controversial terms and long-term directions.
I could sense *most* co-authors, though, were disappointed at the removal of the more explicitly-stated bold future directions in our perspective piece. Only myself and two others seemed more conservative. Some co-authors even tried to edit back in softer mention of these bolder terms. Our current version does not explicitly mention provocative terms.
I am concerned:
1. Peer reviewers may ask us to add in the explicit terms as future directions. One co-author told me this is likely, because not mentioning the bold terms could make us look uninformed or like we are 'beating around the bush' about where this field could lead.
2. Our paper will be sensationalized in the media. Our last authors have said they think this paper will almost certainly make it to the media. I think they plan on contacting various media outlets. They are comfortable with the explicit terms (and have been publicly, which is why our paper, by virtue of being connected with them, softly implies the same). This is my biggest concern - having my name (as first author) connected to sensationalized headlines that make our paper sound extremely provocative.
I wanted to decrease the chance of these situations happening. Especially point 2. I plan to have a conversation with the last authors and ask them to avoid using those explicit terms when contacting the media. I am not sure if they will remember (due to being very busy), if they will comply in the end (especially as co-authors tried to edit back in explicit terms), and if they will state it clearly enough to media writers.
**Has anyone ever dealt with such a situation before? Are there any other preventative measures I can take? Such as requesting that I can review media pieces before publication etc. or that my name simply not be mentioned at all?**
(As a side note: I do plan to switch fields after this. I am on great terms with all my co-authors and plan to stay that way! But, I am switching into a field where there are fewer ethical complexities).<issue_comment>username_1: I doubt that there is anything you could do if you want the article to have any impact at all. Those who would sensationalize, and especially mischaracterize, research aren't especially rational and will do what they want. Look at what has happened to COVID related research, for example. Where would we be if that research hadn't taken place or people were timid about publishing it?
Write the paper so that it adheres to academic/scientific best practice. Let the trolls troll. They're going to in any case.
Upvotes: 1 <issue_comment>username_2: To the extent that your dilemma involves a controversial topic, diverging opinions between authors, and different appetites for sensationalism, I have experienced something similar. This formed the subject of a [previous question](https://academia.stackexchange.com/questions/120839/writing-homework-essay-anonymously-to-avoid-controversy/189729#189729) on this site.
However, the stakes were much lower because it was a limited publication and there was no media involvement.
Nevertheless, I believe some similarities are relevant. Your reluctance to draw attention to these perspectives seems to arise from:
(i) possible personal ethical concerns,
(ii) impact on future funding/collaborations.
You are the first author, and therefore you do have a larger say in the final submission (I'm assuming this is not an alphabetically ordered list). If your ethical concerns are strong enough, you should absolutely state them to your co-authors, and not give their disappointment too much credence. Scientific disagreement can and should be resolved through discussion; you shouldn't need to capitulate. It doesn't sound like the disagreements are critical, but if they are, then the last option would be for one of the parties to withdraw from the work. It is very unlikely that this would be required though.
On the other hand, if your discomfort arises mainly from funding limitations, I suggest you disregard these. As the contours of technology change, a lot of motivations and directions change- this includes funding agencies as well. It is not a good idea to make long-term decisions based on their current disposition, especially in a field that is expected to see disruption.
Finally, on the specific question of media, there are two things to consider. Either the media pick the article up themselves, or one of your co-authors actively approaches them. To avoid the first, you could consider using a restrained, non-provocative title (or one that would provoke your academic readership, but not the general public). In the latter case, I'm sure you could ask the co-authors to be sensitive to your concerns. Beyond that, there's nothing you can do. No media outlet is under any obligation to let you vet/review their publication. It's really beyond your control, and probably best to let it take its own course.
There comes a tide in the affairs of men, etc.
Upvotes: 2 [selected_answer]<issue_comment>username_3: Here's how to decrease the chance of your work being sensationalized in the media.
You **make sure that the press release about your work is measured and written in a way that is unlikely to be sensationalized**. Make sure to not have any sentence in there that would make very controversial claims when taken out of context.
This will have a significant impact, because most 'reports' about a big-impact research paper more or less regurgitate what the press release said. Most of the time I see scientific articles being sensationalized, it is *not* because the newspaper journalists acted irresponsibly but because the official press release was written in an irresponsibly sensationalist way.
Note, you may have to spend some political capital on this within your collaboration, but as an author you have some moral rights, and it sounds like you have a good relationship with the other authors. Also, you might have to argue with the people in charge of press releases at the university.
Upvotes: 2
|
2022/12/12
| 1,254
| 5,204
|
<issue_start>username_0: Let me begin with a short disclaimer: I am aware that similar questions have been asked here more than once. However, none of the answers seemed useful to me and hence, I decided to make my own post about it.
I'm 29 years old; I have just finished my PhD in math and am working as a PostDoc (fixed 1-year contract). The focus of my research is on analysis in partial differential equations (nothing numerical, though).
**Backstory:** Although I could never imagine myself doing anything other than being a researcher, as soon as the last year of my PhD started, I began thinking of leaving academia. The reasons: lack of motivation/inspiration for research and the overwhelming chase for a permanent position (if you ever get one at all). It took me, of course, more than half a year to realize that I am not a "quitter", and I simply changed my mind. For anyone wondering why I continued as a PostDoc, the answer is "because it was the only job I could find at the moment". It was an offer that came out of the blue and I mostly grabbed it because I would have been unemployed otherwise. However, in the back of my mind, it was also me giving academia a second chance.
But no, unfortunately it is not working out. Teaching is something I not only feel good at but also sometimes enjoy. Yet, I would not like to remain a "teaching assistant" for the rest of my life (bad salary and no growth prospects). On the other hand, research is something that bores me. I see no interest/motivation in studying new tools and techniques or proving another theorem in which we simply replaced assumption X with assumption Y. The cherry on top: I do not feel particularly excellent at research, and this makes me feel even worse every time I have to study something new and uninteresting. I do have some papers besides my thesis, but I never felt as comfortable in research as I do in teaching.
**To the point:** I really feel that my time here is limited and that I have to jump off the boat (although it might be a bit late; I'm 29 with no work experience outside academia). My huge problem is that I do not know where to look. Once, when I was 16, being a scientist inspired me so much that I never questioned any of my choices (bachelor, master, PhD, all back to back). I was just doing what needed to be done in order to reach my final destination. Now the destination is foggy and this is sad (if not devastating). I see a lot of mathematicians jumping to data science or software engineering, but I am not quite sure that I would enjoy something like that. Moreover, besides some basics in Matlab and Python, my programming skills are not that great.
**Questions:**
1. Do you have any suggestions on how to get inspired/motivated again for a new career goal? I am not afraid of having to educate myself further as long as I am passionate about the goal.
2. Is there anything else, besides data science, programming and banking (which I find completely dull), for mathematicians with no previous work experience? Preferably something challenging, with career growth and home-office possibilities, that is sexy and fun (as being a researcher was :P)
I apologize for the super long post but I thought a few more details would be important. Many many thanks in advance for your time. Any feedback is highly appreciated!<issue_comment>username_1: Transdisciplinary research needs researchers like you: PhD-prepared problem-solvers! While teaching may not be your calling, you may find that your unique approach to problem-solving would be ideal within an academic setting where you might focus on work being done within a cross-disciplinary research institute.
Upvotes: 2 <issue_comment>username_2: A mathematician, but not motivated to do research? (Assuming you are in the US...) There are teaching positions not requiring (or not emphasizing) research. Four-year colleges; two-year colleges; even some elite high schools may be happy to get a Ph.D. to teach advanced math.
Upvotes: 3 <issue_comment>username_3: I originally wrote most of what's below in comments, but on reflection it should probably be an answer:
Any interest in applied PDEs? They show up in parts of mechanical engineering, electrical engineering, geophysics, physical oceanography, atmospheric science, spatial population models, epidemiology, traffic models, biomedical engineering, signal/image processing, etc.
If you're potentially interested in a specific field/domain, go find out more about it. How does/might the field use PDEs (or mathematics more generally)? What could you contribute and what need is out there for that? Be prepared to learn some domain knowledge. Learn to speak their "language" and find out what problems they'd love to solve.
You mention being in Germany. Would the Netherlands be an option for you, perhaps at a University of Applied Sciences? (In Dutch, *hogeschool*, similar to *Fachhochschule* in Germany, but with differing job entry requirements.)
By the way, you might try reading parts of <https://80000hours.org/> (a career advice website) as food for thought. (Please note that I am not affiliated with 80,000 Hours in any way. Just suggesting something that might help.)
Upvotes: 0
|
2022/12/12
| 1,379
| 5,941
|
<issue_start>username_0: Do you think it's possible that, in the future, there will be PhD programmes in mathematical physics offered entirely online?
As I see more and more courses being offered online, I assume that, if academia is to progress, this is bound to occur.
I once graded people's HWs offline, and nowadays I am grading their work through my PC.
Though progress is rather slow in academia... :-)
I also took an exam through Zoom... what an experience!<issue_comment>username_1: Anything is possible, and the future has a long time span, so yes, it is possible in some version of the future. It is unlikely, though, on any timeline that might be useful for someone at the brink of doctoral study.
The experience with online education in general during the COVID crisis has not been very happy. I doubt that many people are "yearning" for that to continue. Of course, that is a result of their training and background. It could be different by the year 2424.
In principle, it might be easier for doctoral study than at other levels, but perhaps not so much for US degrees that normally depend on advanced coursework and comprehensive exams. So, a lot might need to change here. In other places it might be more possible if the entire program is research based and doesn't require lab work or experimentation. People do remote collaborative research in some fields, of course, and that could be extended to doctoral study.
Theoretical physics in general, and mathematical physics in particular, is probably a decent candidate for such a program, if the kinks could be worked out and the necessary remote resources marshaled. Lab-based physics probably is not so easy. Einstein, I remember, did quite well with only physical mail for communication (ah, those were the days).
But, a student might have to be content with less guidance under such a system. That would disadvantage a lot of students who need that timely guidance.
In the short term, if someone were able to get such a degree they might have problems in the job market, since there are a lot of experiences they might have missed, such as an opportunity to teach. So, a mixed system might be worse than one with only online study.
---
Caveat: There might even be such a program now, though I wouldn't have a way to know. I would be skeptical of its quality, however, and would ask serious questions of the proposer if I had to evaluate such.
Upvotes: 2 <issue_comment>username_2: It depends what you mean by "on line."
Some classes have been video-recorded and those vids are available on line, often for free or at least very cheap. Probably it won't be the case that just consuming recorded information will ever get you a degree. At least, not in the next few years. There will need to be personal interaction with a prof and other co-students and other people.
There will very likely be more interaction done remotely. People were pushed strongly into it due to COVID, and they got a taste for it. And various software products to assist with remote work have significantly improved.
In the other direction, it's expensive to provide office space and classrooms. Even for grad students that get stacked 4 or 6 or 8 to a room. And letting people access buildings requires services like security and maintenance and cleaning and so on. Universities would be glad not to have to pay for that.
Also, more and more of the reference material is available digitally. These days a text book or journal in digital is usually cheaper. And it's a lot easier to carry.
So remote work will probably be a big part of life in the future.
There will probably also be pressure for in-person at least some of the time. So far, the on-line stuff is still a narrow window, at least for setups that ordinary people can afford and accommodate. An in person interaction is still different in character.
A friend of a friend was telling me about a "remote conference room" setup. At the end of the conference table there is a very-big-screen high-def computer screen. It's like 8 to 10 feet long. And there are really good cameras at each end. It does compression and such and goes over regular WIFI internet. So you sit at the conference table and the entire end of the table looks like the other attendees are just at the other end of the room. The problem is, it's expensive hardware and you have to buy a subscription. Most uni students couldn't afford it.
The relationship with your prof and other people in the uni is an important part of an advanced degree. It's a good thing to have for an undergrad degree also. But for a PhD it's vital. You are expected to be a contributing member of the school. You are supposed to interact with other students, your prof and other profs, post-docs, and other people. It is possible to do this to some extent remotely, but it's a lot harder.
Just as one example, going out for lunch, or a beer, or whatever, was a big part of these interactions for me. Quite often the grad students would all head to the same bar on a Friday, and many good discussions were had. There's a bar in Toronto called The El Mocambo. During my PhD, the downstairs at the ElMo was where we'd hang. And between sets of some local band we'd argue about the latest homework sets, discuss what people were doing for research, and what we were going to do after we graduated. The current state of remote interactions could never have produced this kind of thing.
So I'm going to suppose that completely remote degrees will be quite rare, especially at the graduate degree level. At least for the near future. If the remote interaction tools are improved this could start to change.
Upvotes: 2 <issue_comment>username_3: Under the traditional PhD model e.g. in Germany, you can do a PhD without ever showing up at the university or talking to anyone at the university, except for the defense of the thesis.
So yes, it is possible.
Whether this is desirable is an entirely different question.
Upvotes: 2
|
2022/12/12
| 6,189
| 24,001
|
<issue_start>username_0: As a PhD student and a post-doc, I always liked being a TA, and I always considered teaching as a natural duty for grad students and faculty. In the end, universities are and should be mainly about transmitting knowledge to students. Sometimes, I even enjoyed grading and proctoring because it would give me a short break from a highly creative job (research) to a mechanical one, where I could partially switch my brain off.
While a minority of my colleagues would enjoy teaching as much as I did, the large majority would endlessly complain about their teaching duties, and boringly compare their teaching load to the lighter one that colleague X or Y got assigned by the department, shouting to the world how unfair that was.
I never understood why this is the case. In particular, I have two questions:
1. How can someone end up working in academia, if teaching is such a burden?
2. Why do people in academia (from grad student upwards) complain so much about their teaching load, often comparing it to that of their colleagues?<issue_comment>username_1: This will depend on the institution, country, etc. But at many/most research-intensive universities career progression is largely dependent on research productivity (grants, papers, etc). Teaching takes (a lot of) time away from research and since it is largely not used for promotion decisions is often seen as wasted time.
I enjoy teaching but I also want to keep my job and advance my career. It's a difficult balance and I found that I must carefully defend my research time lest teaching and administration completely take over. It is very easy to be envious of a colleague who has managed to get themselves a lighter teaching load (sometimes by being intentionally bad at teaching) who gets promoted more quickly than those who deliver a great learning experience for their students at the expense of their own research.
This is obviously a huge generalisation. There will be academics who are great at both teaching and research, institutions that do reward teaching, etc. However, this is essentially the way teaching has been viewed from my experience in academia.
Upvotes: 5 <issue_comment>username_2: 1. Many academics do not have much time to spend on research. As a PhD student or post-doc, your primary duty is research. In our case, we would like a 'break' from teaching and governance so that we can get back to our research.
2. In many universities, teaching is not valued as much as research. At my university, even if you teach well, it is very difficult to gain promotion to a high level.
Upvotes: 4 <issue_comment>username_3: As you say, complaining about teaching can be quite off-putting. But why do people do it? I have two short answers: incentives are not aligned, looking for a connection with others.
Promotion, school rank, and influence depend heavily on research. Teaching, while deeply satisfying and beneficial in a number of ways, typically does not lead to promotions. Research is also what counts most in hiring. For an anecdotal example, [this new PhD received 6x as many clicks on their research page as on their teaching page](https://thelittledataset.com/2022/03/21/job-mkt/). There's no doubt some self-selection going on, but it is illustrative.
As a result, Professors sometimes see teaching as something that inhibits their personal progress and career. There are great complementarities, however. Communication skills are critical for researchers, and repeated teaching of the same material gives new understanding.
On the other hand, sometimes this is just a common way of 'talking shop'; some people are looking to connect through complaining. This is perhaps a human instinct, but certainly not best practice.
Upvotes: 3 <issue_comment>username_4: I'd start by considering that people complain about *everything*, especially to potentially sympathetic ears. When I was a line cook, I complained about cooking, as did all of my coworkers. My friends in software jobs complain about writing software, meetings about software, etc.
Similarly, academics will complain about the other parts of their work as well, on the research and service sides (publishing and journals, peer review of their work, peer review of others' work, grants, too many or not enough students, university administration, committees, etc).
While I agree with you that teaching is an important part of academia, it's often a *forced*/obligated part from the perspective of a professor's job. There simply aren't many jobs that are "research-only", so everyone who is in academia primarily to do research experiences teaching as an additional obligation. Even if they do actually *enjoy* teaching, it's still perceived and treated as an additional burden. Instructors may have a big influence on their individual students, but at top research institutions teaching doesn't typically get much "credit" in terms of hiring/promotion besides as a "checkmark", and doesn't have much influence in perception outside the institution. The most famous/influential/honored professors are labeled as such by their research output, not their teaching, and time spent teaching is time spent not researching.
If you put a child's favorite entree next to their favorite dessert, and instruct that they need to finish the entree in order to get the dessert, you're setting up a natural hierarchy that makes eating the entree a chore, even if it would be thoroughly enjoyed in another context.
Upvotes: 8 [selected_answer]<issue_comment>username_5: I agree with Bryan's observation that people have a tendency to vent about their work to sympathetic ears, even if they are complaining about fundamental aspects of their work. Nevertheless, there are some understandable reasons why academics tend to complain more about teaching than about research:
* **Time on teaching has an opportunity cost that impacts research (and career advancement):** Career advancement in academia is still primarily determined by research output, so time spent on teaching detracts from time available to pursue the research that will drive your career in the long-term. This is one of the primary reasons that academics complain about high teaching loads or inequity in the allocation of teaching loads across their school. Many universities are taking action to allow career advancement within a "teaching stream" but it is still easier to advance your career with research success. For those academics who are interested in climbing the professorial ladder, time spent teaching may be suboptimal and therefore excess time on this may induce complaints.
* **Teaching the same topic repeatedly can become less interesting over time:** This is not always the case, and it can even go the other way, but sometimes repetition of teaching the same topic over a long period of time (e.g., decades) can cause some *ennui*, even for academics who are passionate about their subject. Students tend to make the same types of errors again and again over different student cohorts, so much of teaching work is repetitive. (There are only so many times you can correct elementary calculus errors by students before some of the gloss wears off that activity, and similarly for most academic topics.) Academics sometimes move around courses to add some variation to their teaching, but even with this variation, teaching in a field over a long period of time can become less interesting over time.
* **"These young people are strange and scary to me":** Professors age, but their student cohorts stay the same age. As this age gap widens, the academic has [less in common with his/her students](https://www.youtube.com/watch?v=5DlTexEXxLQ), will (correctly) view them as more immature/child-like, and will be more removed from their culture, habits, ways of talking, thinking, acting, etc. In particular, younger students may have ways of behaving, thinking or talking that are off-putting or even irritating to some older academics, or which cause them to despair for the future of the human race. This can cause academics to find teaching less enjoyable as they get older, and it may also be a contributor to greater focus on research.
Upvotes: 5 <issue_comment>username_6: Often teaching loads are way too high given the time allocated, and no matter how much you enjoy teaching and the associated work (prep, grading, student support, etc.), you don't want it to be eating into your home and family life.
So it's natural to complain when you see others being asked to do less than you. Also, the more teaching you get, the less well you will do any of it, and academics are kind of perfectionist as a rule.
Making fair workloads was one of the most difficult things I had to do as a senior academic, and you quickly realise, when you start to add it all up, that most of your staff are being asked to teach way more than they should be, given all of the other activities that they are also judged on.
Upvotes: 3 <issue_comment>username_7: Others have pointed out that since the 1980s promotion within academia is largely about research output (and not necessarily *quality* output, sadly) and consultancy income rather than curriculum development or teaching excellence. This is even more the case in the science, technology, engineering & medicine departments due to the high laboratory overhead costs.
To this background I would add that the traditional synergy between teaching, which is essentially exchanging perspectives on a subject with younger minds, and research - that is, basic research (hitherto unexplored empirical investigation) or fundamental research (rational reasoning from *a priori* laws of science), but *not applied research* - seems to have been blown aside in the urgency to get academic budgets balanced from year to year.
A STEM department's budget demands so much external funding, and that can only come from committing to research programmes overwhelmingly funded by state research councils and large corporations. The politicians want to see "results" for their expenditure lest the media echo public concern about neglect of spending on public health, primary and secondary education, housing, etc. The corporate executives want to obtain technological advantage so they enjoy sustained profits into the future. So academics end up doing research that is more obligatory and less personally interesting, with less time to complete the work. Their teaching hours are shoe-horned into a tight schedule. They are human enough to feel this is unfair to students but have to be careful what they complain about: orthodoxy to the university agenda counts for a lot in academic promotion.
That is the conundrum of STEM academia today.
I have no solution to offer on this budget-induced conundrum. But as to your own individual situation and its demoralizing effect on you, I would suggest that if you want to stay in academia it might be wiser to look at working in a liberal arts type college rather than a large university - and certainly not a "research-led" university. I feel that the human relations priority of many of these liberal arts colleges would tend to support a reset towards the mutually inspiring teaching/research relationship that you (and many more) seek. It might also yield a return to consultancy coming *to* the academic (e.g. the office phone ringing) rather than him or her actively hustling for it.
Upvotes: 2 <issue_comment>username_8: **Complaining can be tactical**
As many excellent answers above have noted, teaching, great though it may be (it funds the institution and is one of the vital purposes of a university), rarely helps with promotion, whereas research does. As a purely self-interested lecturer, therefore, your goal is to do as much research and as little teaching as possible.
However, job assignments in academia are extremely arbitrary. Heads of department often divide up teaching load, and often have to ask staff to take on more teaching than they'd ideally be allocated. There's a huge disparity between teaching needed, and teaching that staff are willing to take on.
It therefore makes sense to, at all times, seem horrifyingly overworked. If you seem like your teaching load is fine (or, in many places I've worked, not literally killing you), you will be given more teaching. This hurts your research time. Academics that are good at managing this split get promoted, and so tend to be around for a while, still complaining about their teaching load, as a kind of pre-emptive defence at being given more of it.
Upvotes: 4 <issue_comment>username_9: In addition to the comments on rewards (status, career advancement, faculty positions) being better for research than for teaching, I would expand on a category that could be summarized as the future Return-On-Investment (ROI) of a research focus vs a teaching focus.
* Your chance of becoming a celebrity / famous is better with a publication in Nature / Science / (Prestigious Journal of Choice) than it is from years of teaching students.
* Your chance of becoming wealthy from research grants (and secondary benefits) is greater.
* Your chance of having a successful spin-off company is greater.
* Your chance of lateraling into a lucrative/prestigious industry/government job is greater.
* Your chance of being chosen for an influential steering or advising committee is greater.
* Your chance of becoming "management" (dean, provost, faculty chair) is greater.
* Your chance of lateraling to a better (read: more prestigious) institution is greater.
* Your chance of having students compete to work with you is greater.
Thus, directly addressing the two questions.
1. How can someone end up working in academia, if teaching is such a burden?
* In short, that's not why they're in academia.
2. Why do people in academia (from grad student upwards) complain so much about their teaching load, often comparing it to that of their colleagues?
* Teaching load runs counter to almost all the ROI factors that lead to what Americans value (celebrity, wealth, control)
**Edit 12/16/2022**
Responding to comments here because it's too long for the comment box.
<NAME> - (A) "Nobody?" [Beyond the Labs: Pathways to Wealth for Academics](https://www.linkedin.com/pulse/beyond-labs-pathways-wealth-academics-olumuyiwa-oludayo/) and [Social Capital II: Determinants of Economic Connectedness](https://www.nature.com/articles/s41586-022-04997-3) "Note Figure 5b, top research schools" and [Deep Class and Wealth Divisions Between Faculty and Broader Society](https://www.highereddive.com/news/faculty-divisions-class-wealth-college-change/637058/) "PhDs are in significantly better zip codes, and their children start in better zip codes" (B) [Wealthier PIs Submit Way More Grants](https://drugmonkey.scientopia.org/2013/09/17/how-the-wealthier-pis-submit-way-more-grant-applications-than-you-could-ever-imagine/) and [The Rich and the Rest](https://grantome.com/blog/rich-rest) "Somebody's looking at grants as a path to wealth"
@username_4 [UC Faculty Salary Scale](https://www.ucop.edu/academic-personnel-programs/_files/2022-23/oct-2022-salary-scales/t1.pdf) vs [Dean Salary Scale](https://www.ucop.edu/academic-personnel-programs/compensation/deans-salary-structure/deans-salary-bands.html) or look at Higher-Ed [Executive Pay](https://www.higheredjobs.com/salary/salaryDisplay.cfm?SurveyID=53) vs [Faculty Pay](https://www.higheredjobs.com/salary/salaryDisplay.cfm?SurveyID=56) vs [Non-Tenure Pay](https://www.higheredjobs.com/salary/salaryDisplay.cfm?SurveyID=57) Especially anything with the word "Research" in front of it. Computer and Information Sciences and Engineering show especially enormous disparities.
@Morgan - What are "most" academics trying to do? Do you have research or surveys to support your views on the career objectives of [12.5](https://ourworldindata.org/grapher/number-of-teachers-across-education-levels) [million](https://www.worldbank.org/en/topic/teachers) tertiary education faculty? How many are attempting to have a spin-off, such as in [Urbana-Champaign](https://kenney.faculty.ucdavis.edu/wp-content/uploads/sites/332/2017/09/UIUC.png), [UC Davis](https://kenney.faculty.ucdavis.edu/wp-content/uploads/sites/332/2017/09/UC_Davis.png), or [All Sacramento](https://kenney.faculty.ucdavis.edu/wp-content/uploads/sites/332/2017/09/Sacramento.png)? Perhaps the number who are [advising](https://www.ycombinator.com/blog/how-to-spin-your-scientific-research-out-of-a-university-and-into-a-startup/), [documenting](https://www.researchgate.net/publication/284283369_Fifteen_Years_of_Academic_Entrepreneurship_in_Italy_Evidence_from_the_Taste_Project), or [promoting-1](https://edu.rsc.org/download?ac=509815) [2](https://www.cmu.edu/cttec/company-creation/index.html) spin-offs? Perhaps just ref [these](https://academia.stackexchange.com/questions/85910/looking-for-specific-examples-of-university-spin-offs-where-the-researchers-can) [stack](https://academia.stackexchange.com/questions/792/how-good-are-entrepreneurial-opportunities-for-faculty-members) [exchange](https://academia.stackexchange.com/questions/109326/while-in-phd-i-developed-a-lot-of-code-that-i-want-to-turn-into-start-up-can-i) [academia](https://academia.stackexchange.com/questions/77945/my-adviser-wants-to-use-my-code-for-a-spin-off-but-i-want-to-use-it-for-my-own) [answers](https://academia.stackexchange.com/questions/40208/professor-wants-to-expropriate-submitted-code-for-commercial-use) and [topic](https://academia.stackexchange.com/questions/tagged/intellectual-property?tab=Votes).
Upvotes: 0 <issue_comment>username_10: They are showing their true colors.
In reality, those faculty members are just people like anyone else. They do not have above average interest in helping others or the over-the-top altruism laid out in the university brochure.
They are interested in their next grant. Teaching simply interferes with their interests. Teaching only advances their respective careers as far as it is a requirement.
Upvotes: -1 <issue_comment>username_11: I'll answer both questions, through my own context.
1. How can someone end up working in academia, if teaching is such a burden?
Academia is more than just teaching and a lot of people have very strong misconceptions about academia, including the teaching part. Academia is also research, it is, to many people, a chance to take a topic that you have a deep, passionate interest in and really explore it. When people pursue it, their ultimate goals can be starkly different. Some want to teach students in the best way possible, some just dream of somehow getting tenure while teaching in a passable way, some dream of getting funding for their passion projects. None of these things are inherently wrong and not all academics need to be a glutton for teaching. There are some very smart people out there, who don't have the right mindset or skills for that part of the job but whose brilliant minds will surely bring prestige to the institution and a lot of newfound knowledge and observations to the field.
2. Why do people in academia (from grad student upwards) complain so much about their teaching load, often comparing it to that of their colleagues?
Because these loads are very often uneven or feel uneven, because sometimes you have a nightmare class or a nightmare student, because teaching while you also have to do a bunch of publications is... it's a lot. The thing is, in most jobs, as time passes and your career trends upward - your lot gets easier. In academia, that is not always the case. You may be a great teacher, but if you don't do the right publications, you end up handling everything yourself while someone with better people skills, a savvier approach to research, etc. will have assistants doing everything.
That said, teaching is, like it or not, an essential part of the experience and I think anyone who actively wishes to never do it again simply won't cut it in modern academia. Now, whether that's a problem with them or with the inherent structure of academia... that's a whole other question.
Upvotes: 0 <issue_comment>username_12: This answer will be STEM related.
In STEM, at least in my experience, almost all of one's time prior to becoming a faculty member is spent doing research: I had never stood in front of a class of students until I was appointed to a faculty position. Thus, the people who make it through the hard years of PhD and postdoc are those that are committed to their research. But by the end of their time as a postdoc, the only option to stay in the research world is to get a faculty job.
Most enter a faculty job thinking that they will continue to do research, and spend a little time teaching the occasional class. They are initially excited about this, as they see the nobility in teaching the next generation.
But it turns out that most people at a faculty level are given teaching responsibilities that would require full-time work to do well. But they must also be successful at research to make tenure/get promoted/keep their team employed.
They are effectively expected to do two full-time jobs: one which they have spent 15 years training for and already know they love, and another which they may feel is important but have zero training or experience in. Nobody tells them which bits of research they should or shouldn't be doing, whereas the teaching is dictated and micromanaged. Given this, it is unsurprising that many come to dislike the teaching. Everyone likes to be good at the things they do, but being good at teaching takes a lot of time, time which they may feel they owe to their research (and in particular to their research team, who depend on the lab's success for their livelihood). And even then, on the probabilities, they are unlikely to be as good at teaching as they are at research. To add insult to injury, university administration often tells them that a teaching load that is taking them 40-50 hours a week should only be taking 20 hours.
I don't mind the actual hours with students, and I tend to think that if teaching was the only thing I had to do, I might actually enjoy it (although probably not as much as research).
Upvotes: 2 <issue_comment>username_13: I will briefly give my two cents, as a person that actually likes teaching but complains a lot about it.
The reasons I complain are:
* **Overloaded with teaching**. My time should be split (according to my contract) 50-50 on teaching-research. Unfortunately, I find that I have to spend way more time on teaching/preparing new courses that the department randomly assigns to me.
* **Teaching things that do not interest me**. This goes without saying. Unfortunately, only a few of us can teach what we really like. Most of the time, we tend to fill in holes that someone else left behind. Imagine if you are a number theorist and they ask you to teach continuous optimization in a business department.
* **Teaching on the wrong level**. Most people would prefer to teach something close to their expertise and this means naturally more advanced courses suitable for upper undergraduate/graduate level. Unfortunately, I will not get a lot of satisfaction if I am assigned to teach Calculus to 1st semester students.
* **Teaching the same thing over and over**. While specialized courses close to your expertise are easy to update with new material, not much can be changed in, for example, Calculus. The material and the way of teaching are pretty much the same and very likely to continue to be so.
* **An uninteresting audience**. Teaching calculus to math students is quite different from teaching it to liberal arts students.
* **Unequal schedules**. A valid concern is that some people do get a much better teaching schedule, and that is a fact. You may ignore it, but it can easily evoke feelings of envy that lower your enthusiasm.
Maybe I am forgetting something here, but you get the idea. Enjoying teaching does come with conditions.
Upvotes: 3
|
2022/12/12
| 6,122
| 23,744
|
<issue_start>username_0: What makes a university prestigious?
Is it that they are harder and more rigorous? My intuition would tell me that the more prestigious a university is, the harder its curriculum should be, and the fewer people should be able to graduate from it because the curriculum is so hard. I expected, for example, that Harvard would have a very low graduation rate, since it is so prestigious and thus should be hard to study at. As far as I know, that does not seem to be the case in the US, and a high graduation rate is supposed to be a good thing. Is that so? If so, how do universities manage to maintain both quality and quantity of graduates?
All in all, I have always assumed that a big part of what makes a university prestigious is how hard it is to study there. Is that so in the US?
|
2022/12/12
| 1,191
| 4,273
|
<issue_start>username_0: Some years ago I uploaded some images to Wikimedia Commons and selected "own work". Those images were uploaded under a CC BY-SA 4.0 license.
However, the journal (in which I am attempting to publish an article) uses a CC BY 4.0 license.
My question is, may I use this CC BY-SA 4.0 image in the CC BY 4.0 publication?
Here is an example of an article in another journal that did a similar thing.
<NAME>-L, <NAME>, <NAME>-L and <NAME>-F (2022) Mahonia vs. Berberis Unloaded: Generic Delimitation and Infrafamilial Classification of Berberidaceae Based on Plastid Phylogenomics. Front. Plant Sci. 12:720171. doi: 10.3389/fpls.2021.720171
[](https://i.stack.imgur.com/HPDyk.jpg)
[](https://i.stack.imgur.com/NShn7.jpg)
Unfortunately, my target journal has no documentation or statement regarding the use of Creative Commons images. I need to know the general or internationally accepted rules for such a scenario.
**Update**: The Creative Commons page <https://wiki.creativecommons.org/wiki/ShareAlike_compatibility> clearly states that
>
> What does one-way or two-way compatibility mean?
>
>
> One-way compatibility means that you may adapt work under one license (X) and apply a second license (Y) to your own contributions, but you may not adapt work under the Y license and apply the X license to your contributions. For example, CC BY is one-way compatible with BY-SA. You may adapt a BY work and apply BY-SA to your contributions, but you may not adapt a BY-SA work and apply BY to your contributions.
>
>
> Two-way compatibility means that you may adapt work under one license (X) and apply a second license (Y), and vice versa.
>
>
>
This makes me really scared: do I need to add a CC BY 4.0 license to those media files? And, as @Ian's answer suggests, should I directly submit the images without mentioning that they appeared on Wikimedia?<issue_comment>username_1: Generally, once you have given a permissive license to the public, you can no longer restrict it. The publisher would need to know about the SA license that the work already carries.
My best guess is that you can use it and the publisher will agree. I doubt that they would reject a paper on that basis alone.
It might be a bit weird, however, if they don't actually say the image is CC BY-SA, but anyone else knowing the permissive terms can use the image without violating license. Your image seems to make it clear. Hopefully the publisher won't obfuscate it.
But you need to inform the publisher. They will work it out with you. But if you fail to inform them then there will be issues.
I'm going to guess here that the publisher will be happy enough since they are open access. The reason that they use CC-BY and not CC-BY-SA is that they want to let you retain rights to derivative works rather than, themselves, taking that retention right away through a more permissive license. But you've already done that.
Upvotes: 1 <issue_comment>username_2: >
> Some years ago **I** uploaded some images to Wikimedia
>
>
>
I have bolded the most important word in this question. *You* own the images. *You* provided Wikimedia and the rest of the world a license to use *your* images provided they give you attribution and share alike.
However, you did not sign away your rights to those images.
From [Creative Commons](https://creativecommons.org/licenses/):
>
> Every license helps creators — we call them licensors if they use our tools — **retain copyright** while allowing others to copy, distribute, and make some uses of their work — at least non-commercially. [Emphasis added]
>
>
>
They are yours to do whatever you want.
This includes releasing the images under **a more permissive license.** CC BY-SA 4.0 requires that "adaptations must be shared under the same terms", or *shared alike*.
In contrast, the license required by the publisher, CC BY 4.0, does not require sharing alike, making it **less restrictive.**
[](https://i.stack.imgur.com/5KaT5.png)
You should reference the images to avoid any appearance of self-plagiarism.
Upvotes: 2
|
2022/12/13
| 1,186
| 5,659
|
<issue_start>username_0: To gain extra credit for a course I lead, students can share their notes from a lecture which we intend to use as starting point to develop a better (free) learning material than just sharing slides from the lectures. To this end we explicitly asked the students to share their notes under a Creative Commons license.
At this point I have second thoughts whether this is completely OK ethically/legally. I would intuitively think that requiring students to waive some of their rights as authors would be problematic if the assignment was necessary to complete the course. Doing so for optional extra credit (which then lowers their point limit for the final exam) seems less problematic, but I am not sure this is actually a good enough distinction. Thanks for any notes/feedback.
I should also note that using students' work to build educational material is likely already allowed under our country's laws (Czechia, EU) on intellectual property, where work created to obtain academic credentials is explicitly allowed to be "used by the academic institution for its internal needs and for educational purposes", no permission from the author required. CC licensing is still a step further, though.<issue_comment>username_1: The operative word for me is "ask". To my mind, there's nothing wrong with asking students to do this. But I agree that tying grades to this request is problematic. This no longer makes it a free choice. And presumably part of the point of this exercise is to encourage students in the future to share their work freely for the benefit of others. This point is, in my opinion, somewhat undermined if they feel coerced to do it in order to get a better grade.
In my opinion, a better way of doing this would be to tie the extra credit only to sharing the notes with you. You can attach to this a *request* to licence the notes under CC, but the grade should not be dependent on that. I suspect most students will happily do so.
Upvotes: 3 <issue_comment>username_2: You should definitely consult a corporate lawyer at your academic institution for a definitive answer, but I will offer my thoughts.
First, the excerpt that you quoted does not seem to apply to the situation that you are asking about: "work created to obtain academic credentials is explicitly allowed to be 'used by the academic institution for its **internal** needs and for educational purposes'". Licensing work under Creative Commons goes far beyond internal use.
Second, I do not think students should ever be forced to give up their copyright or the freedom to exercise their copyright. (Licensing under Creative Commons would mean that the students forego their freedom to exercise some of their copyright privileges.) Offering extra credit is not directly forcing the students but it is definitely pressuring them by compelling them to forego some of their rights in exchange for free credit. **I would consider this unethical.**
For some background, I am on my institution's research ethics board, and we would definitely not permit a similar situation in a research context. Even though your situation is not a research context, it is similar in that students are being offered extra credit for something which they might otherwise consider to not be in their best interest.
My institution's research ethics board takes a creative approach to research ethics, by which I mean that we try our best to help researchers design ethical research rather than simply saying no to proposals. Along those lines, here is my proposal for something that might work better:
**I propose that you offer extra credit to all students who are willing to share their notes and you let the matter of Creative Commons licensing be completely optional.** For students who decline the Creative Commons license, you give them the credit but do not incorporate their work in what is publicly shared with the rest. However, you should take the time to educate the students on the benefits of such licensing. In particular, highlight that Creative Commons licensing requires people to cite the work when used, and so this would give the students visibility and appreciation. (However, you must be responsible to also inform them of the implications of the irrevocable release of their work.) If you do this, you would probably find that most students would willingly choose the Creative Commons license.
Upvotes: 5 [selected_answer]<issue_comment>username_3: Your question essentially amounts to: "is it ok to sell extra credit to students?" You are offering them marks if they give you something with intrinsic value: the rights to use their notes.
It is probably the case that the monetary value involved is very low, but the principle remains. Would you have problems giving extra credit to students in exchange for buying a coffee and delivering it to your office? Or to those who paid you €5?
**Edited to add**: Discussion in the comments led me to appreciate something that I think is important here. The proposed 'extra credit' task here (publish your notes with a given license) has minimal educational value for the student concerned. This is in marked contrast to alternatives such as "write a Wikipedia article" or "contribute to an open-source software project", where concrete learning outcomes can be identified: e.g. consolidation of knowledge, communication skills, coding experience, etc. In those cases, extra credit is (or at least can be argued to be) given for demonstrating the ability to apply knowledge in 'the real world'. In the OP's case, the credit is simply offered as an inducement to act in a certain way.
Upvotes: -1
|
2022/12/13
| 1,217
| 5,569
|
<issue_start>username_0: I am an electrical engineering PhD student (computational electromagnetics) and recently I learned about systematic reviews, which seem to be the norm in the health sciences. I tried to search for systematic reviews in the most relevant journals in my field:
* IEEE Transactions on Electromagnetic Compatibility
* IEEE Transactions on Antennas and Propagation
* IEEE Transactions on Power Delivery
* International Journal of Electrical Power & Energy Systems
I found **no** systematic reviews in IEEE journals and [only **one** published this year (2022) on IJEPES](https://www.sciencedirect.com/science/article/pii/S0142061521011133) and, even then, from what I skimmed, it doesn't present the information I expect from a systematic review (namely, [the PRISMA checklist, such as inclusion and exclusion criteria](https://www.prisma-statement.org)). This very superficial research I did suggests that systematic reviews are uncommon in electrical engineering (I did find some in Software Engineering, though), which raises the question: why aren't systematic reviews present in electrical engineering journals?
|
2022/12/13
| 2,221
| 8,644
|
<issue_start>username_0: There are very limited sources on the topic I am working on, so I have used quite a bit of information from a single source and I have cited it. That source has citations in its body and I have cited the original source as well.
Is this okay? Will it be considered plagiarism?
English is not my native language; I hope I have explained it so that it is understandable.<issue_comment>username_1: You are falling into a very common trap: plagiarism and paraphrasing are almost unrelated concepts.
Plagiarism is all about not giving credit where credit is due. This means one still may commit it even if all of the original words were changed - as long as the amount of intellectual work done by someone else going into your own work is unclear. It is mostly about ideas, but the expression (wording, sentence structure) is also important. Changing the expression is not the main reason we paraphrase, however - one could just keep the original text in quotation marks, after all, and the result would be clear of plagiarism.
Paraphrasing is there to give old ideas a new spin, to provide your own thoughts on the matter, to synthesize something new and not just give the reader a compilation of existing texts.
For your work to have substantial originality, it has to have a lot of material written by you. This means no excessive quotations (definitions of excessive vary by field). Overall, quotes should be used thoughtfully and only on concise, well-expressed thoughts. [A more detailed explanation could be found here](https://www.uhv.edu/curriculum-and-student-achievement/student-success/tutoring/student-resources/a-d/decide-when-to-quote-paraphrase-and-summarize/). Phrases such as "As <NAME> notes, one does not have to rely on their memory if they are being consistently truthful" are bad writing. Paragraph-long quotes are probably unnecessary as well.
In your case, you will probably be dealing with summarizing the source material ([here is another question on SE dealing with borrowing long chunks and paraphrasing](https://academia.stackexchange.com/questions/180415/is-paraphrasing-with-citation-considered-plagiarism)).
In the humanities, dealing with scarce sources is common and understandable; in STEM, it is quite rare. "A commentary on (a proof of a theorem)" is almost unheard of, but "A commentary on (a philosophy book or two)" is ubiquitous.
If you make the amount of borrowing clear, there is no plagiarism. The criteria for originality will be field-dependent.
**EDIT 2**: To give a few examples of covering the existing body of material...
Bad:
* As Jones and Miles explain, "Understanding the topology of contact Riemannian manifolds is essential for underwater basket weaving. Early research in underwater basket weaving has employed a number of more naïve approaches to the structural integrity such as those based on graph cuts (Xu et al., 1959), convex optimization (Nakamura, 1978) and, more recently, finite element methods (Capablanca et al., 2002; Jones et al., 2010)". *This adds no value to the work of Jones and Miles; it should generally be replaced by a reference. You might also be infringing on copyright here.*
* Understanding the topology of contact Riemannian manifolds is essential for underwater basket weaving (Jones and Miles, 2021). Early research in underwater basket weaving has employed a number of more naïve approaches to the structural integrity such as those based on graph cuts (Xu et al., 1959), convex optimization (Nakamura, 1978) and, more recently, finite element methods (Capablanca et al., 2002; Jones et al., 2010). *And this, given the above, is outright plagiarism*.
* Topology of contact Riemannian manifolds is considered indispensable for modern-day underwater basket weaving (Jones and Miles, 2021). Earlier approaches to making the baskets hold together included graph cuts (Xu et al., 1959) and finite element methods (Capablanca et al., 2002). *Still plagiarism, paraphrasing probably makes it even worse*.
* Consider the topology of a contact Riemannian manifold such as (...). *No explanation given why it is even relevant to your research.*
Better:
Jones and Miles (Jones and Miles, 2021) state: "Understanding the topology of contact Riemannian manifolds is essential for underwater basket weaving". This notion exposes the underlying trend in recent research towards making woven baskets more efficient in industrial applications by cutting the waste during production, increasing the surface area, and reducing the tension in the handle commonly created by machines such as WeavoTron-3000 (UBW Inc., USA). *The citation is probably unneeded here, but you provide your own thoughts on the matter. If you are, say, writing a master's thesis, WeavoTron-3000 is all the rage, but there are issues which your advisor has tasked you to solve - this might be a good approach.*
Good:
Underwater basket weaving as a discipline has undergone drastic changes in the past decades. Here, we follow (Jones and Miles, 2021) to track its overall history. The main issue plaguing early research on this topic was the optimal density of the twigs, which kept being inconsistent until the breakthrough works of the late 1950s (Xu et al., 1959)(Stone et al., 1960). These works used graph cuts as their main approach; however, the results achieved by this technique were also highly volatile with respect to the amount of raw material used. This was solved by applying convex optimization in the now-classic paper by Nakamura (Nakamura, 1978). Industrial development in the following years was generally highly successful, as evidenced by the growth in production capabilities (World Bank data, 2022), and new questions about the optimization arose. At the turn of the century, two main areas were considered the most promising for future research (Ivanov et al., 2000): finite element modeling for increasing the structural stability of the baskets and contact Riemannian topology for increasing the underwater basket performance in real-world applications. The first is covered by earlier and ongoing research (see e.g. Capablanca et al., 2002; Jones et al., 2010), and we are focusing on the second in this work.
As you can see, the "good" option involves a substantial expansion on whatever was expressed in any single work you have found. There is no point in following the textbook descriptions.
For theses, it is a good practice to punctuate the review section with references to increasing levels of concept difficulty: 2+2=4. The integral of a differential form over the boundary of an orientable manifold is equal to the integral of its exterior derivative over the manifold (generalized Stokes theorem; see e.g. (Tu, 2010)). A Sasakian manifold can be considered as an odd-dimensional analogue of a Kählerian manifold (Tanno, 1968). That way, if the reader finds themselves too deep in the weeds, they will have a good starting point. In that, it is similar to answering here on SE: the reader should be able to follow your text without constantly diving into the references (here it is additionally motivated by the possibility of links dying), but if a concept was already covered well elsewhere, no more than a short annotation is needed.
Upvotes: 2 <issue_comment>username_2: You have two concerns. The [answer of username_1](https://academia.stackexchange.com/a/191573/75368) covers plagiarism, but you also need to consider copyright.
It is possible to [violate copyright even with paraphrasing](https://en.wikipedia.org/wiki/Paraphrasing_of_copyrighted_material). One reason for this is that copyright covers, among other things, the right to make *derived works*. If you "use" a lot of a copyrighted work, even with only paraphrasing, it might be considered a copyright violation, even if you cite to avoid plagiarizing.
The way to avoid such a violation is to ask the copyright holder, usually a publisher, for a license for your intended use. For purposes of a thesis this might well be permitted. You can also look at a publisher's site to see if there is already a permissive license for such things. But paraphrasing long sections is probably not considered "fair use" in most places, even for academic use.
One question that comes up in copyright is whether the "copy" (or derived work) diminishes the "value" of the original to the copyright holder. If it does not then there are unlikely to be objections, but it is safer to ask than to assume.
---
Actually, a third issue for a thesis is what your advisor thinks of it. You should have a discussion with them if you haven't already.
Upvotes: 1
|
2022/12/13
| 1,035
| 4,487
|
<issue_start>username_0: I'm a TT assistant professor at an R1 school, and I just recently accepted an offer from a private company. I'm planning to talk with the department head and let him know, and I was wondering if anyone has any thoughts on how to approach this (with examples, if you have anything in mind) and quit gracefully? Also, do you think one month ahead is enough notice before the departure?
Update: OK, just wanted to share an update. I talked with the head today, and something along the lines of this answer from @caleb-stanford worked really well.
For example: "While I have really enjoyed my time here, it's become increasingly clear to me that being in academia long-term is not the right fit. In addition, I have personal and family constraints that would require me to be away from [the area] for extended periods of time, and make having a more flexible work location a necessity for me in the future. I have nothing but positive things to say about [the university]. I'm sorry for the short notice; I'm letting you know as soon as possible, so that you can hopefully make plans accordingly for my departure."
Quitting, but continuing to end of term is likely to be fine, though the head may need to scramble to fill any teaching slots for next term.
Other than consideration for students who will be impacted I don't see a lot of issues, provided that your contract isn't too restrictive.
If your pay is pro-rated in any way, then some adjustments might need to be made but probably more likely in your favor than otherwise.
For the administrative details a month seems reasonable. For student and especially advisee outcomes it might be more complicated.
Upvotes: 2 <issue_comment>username_2: Ultimately, there's no way to say it nicely. You're leaving, and that makes things difficult for them, so the best thing you can do is let them know as soon as possible. They will want to know why, so you have to be prepared with a *well-crafted reason* for your departure.
One crucial detail that I think is missing from your post: do you want to remain on good terms with the university, the department chair, and your colleagues, or are you leaving academia for good and have no interest in maintaining connections?
* In the former case, you have to be a bit careful. Firstly, you should try to let them know as soon as possible -- I would think, more than just a month in advance! And second, you should be careful about your reason. It's generally advised not to make it about the university or the department; things that they can change.
For example, don't say: "I haven't been getting along with my colleagues here and I feel the university is not supporting me enough financially." That will make them feel defensive, and in the worst case, it could leave them thinking that they could have retained you if you had only talked to them earlier, and you could have worked something out.
Instead, it's generally more effective to make your leaving *not about them.* It can be the university location, or personal/family constraints, or you simply have decided that academia is not for you. Since you want to remain friendly with your colleagues, you should be as honest as possible, without unnecessarily insulting them.
For example: "While I have really enjoyed my time here, it's become increasingly clear to me that being in academia long-term is not the right fit. In addition, I have personal and family constraints that would require me to be away from for extended periods of time, and make having a more flexible work location a necessity for me in the future. I have nothing but positive things to say about . I'm sorry for the short notice; I'm letting you know as soon as possible, so that you can hopefully work to make plans accordingly for my departure."
* In the latter case, you have nothing to lose, so you can share some negative details about the department if you so choose, as long as you are careful in expressing it. In this case, you can also choose to leave on short notice (as long as it is allowed by your contract); it may upset them, but at the end of the day, you have to do what's best for you.
Upvotes: 4 [selected_answer]
|
2022/12/13
| 984
| 4,184
|
<issue_start>username_0: I'm halfway through my MBA, and my highest mark is around 73%, which I've had in several modules and pieces of coursework. I've contacted the university to ask for concrete examples of previous answers for exams I've sat that have received > 80%, but they won't provide any. I've read several books on critical thinking and essay writing, but still something's missing.
The university has published guidelines for > 80%, and they're things like:
>
> Exploration of topic showing excellent knowledge & understanding through thorough & appropriate research. Impressive choice and range of appropriate content.
>
>
>
and other similar vague statements.
Specifically, how can I get over > 80% for an exam/coursework at master's level, or should I take this further with the university?
|
2022/12/13
| 333
| 1,484
|
<issue_start>username_0: I received a major revision from IEEE Transactions; however, one reviewer left no comments. This already happened to me once before, and that time the reviewer had left the comments in the “Comments to Editors” section. Should I contact the editors in this case and ask if there are any comments? Or should I ignore it, as it is simply not my problem but their mistake?<issue_comment>username_1: You should definitely contact the editor. The reviewer may have forgotten to attach his/her comments. If it is not a mistake, then in your response just say 'No comments to address'. If it's a mistake and you do not address the comments, then the missing comments will be used in the next round, thus delaying the review process.
Upvotes: 5 <issue_comment>username_2: You can certainly ask the editor to clarify if an error occurred. However, I would say this is not strictly necessary. An alternative would be to simply address the issue in your letter when you resubmit, something along the lines of:
>
> We revised our manuscript taking into account all reviewer comments, except for reviewer 2, who left no comments.
>
>
>
You would probably then hear back from the editor if there were indeed reviewer comments you were unaware of. The worst-case scenario would be to lose some time before publication; it is nothing that would be held against you in the decision to publish (or not). But of course, if you really cannot afford that delay, or do not want to risk it, ask.
Upvotes: 1
|
2022/12/13
| 1,154
| 4,549
|
<issue_start>username_0: I'm looking at the [CNRS](https://www.cnrs.fr/en) CR (=[charge de recherche](http://postes.smai.emath.fr/apres/accueil/livret/livret004.html) - a research only academic position with no teaching duties)
recruitment process, and in the booklet that walks candidates through the application process, it says:
*"You will need to provide a clear, structured presentation of your research project and its scientific interest. You are free to choose the way in which you present your project. There is no limit to the number of pages. Your research proposal should be aimed at one
CNRS unit. You will need to indicate the laboratory or laboratories where your project could be conducted and make at least 2 wishes."*
I talked to a few French academics, and I got the idea that it's instructive to network with the team, by contacting them, giving a talk etc, and if they believe in your ability, they'll agree to write a joint proposal with you. **Is this correct at all? And if yes, who writes most of the proposal - the candidate or the team members?**
**Timeline:** What's also not clear to me is *when* the candidate/applicant is supposed to do so? For example, it's December now, and the [competition has been announced on the CNRS website](https://www.cnrs.fr/en/competitive-entrance-examinations-researchers-womenmen), so does this mean the potential candidates are supposed to have *already* contacted a team and developed a joint research project? So if I were to be an applicant, is it *already late* to contact a team and develop a research project? Am I supposed to have done this already?
Some general lesser known insights on writing this joint proposal would be appreciated too!
Thank you in advance!<issue_comment>username_1: **You are writing 95%+ of the proposal.** Some of your potential future colleagues in the lab of your choice will help you figure out how you and your project integrate into the lab, and will help you write that one specific section, but writing the project is on you.
Yes, you are supposed to get in touch and get to know them: researchers will want to discuss how your research will make the lab better, and will also want to know you as a person, since you will be there for many years. If you still haven't contacted a lab, it is very likely too late to apply for the next deadline; also consider the holiday break. It's not like a regular tenure application in which you send your CV and then they decide if they call you for an interview.
If your PhD diploma is not in French you will also need to get it translated.
Upvotes: 2 <issue_comment>username_2: I speak from the applied math side (section 26 or 61)
>
> I talked to a few French academics, and I got the idea that it's instructive to network with the team, by contacting them, giving a talk etc, and if they believe in your ability, they'll agree to write a joint proposal with you. Is this correct at all?
>
>
>
Yes, you should contact various teams you're interested in (at least 2) and give a seminar talk, visit them to see if you're a "good fit", etc. This takes place before December; usually a successful applicant will start this process in August or September.
>
> And if yes, who writes most of the proposal - the candidate or the team members?
>
>
>
The proposal should be all you, the candidate. You might discuss it with the lab members and get ideas from them but you should be the one touching the keys to make the documents.
>
> Timeline: What's also not clear to me is when the candidate/applicant is supposed to do so? For example, it's December now, and the competition has been announced on the CNRS website, so does this mean the potential candidates are supposed to have already contacted a team and developed a joint research project?
>
>
>
Yes, and it's unfortunate that it's not advertised this way. The process starts sooner than you think and by the time you find this out it's probably too late.
>
> So if I were to be an applicant, is it already late to contact a team and develop a research project? Am I supposed to have done this already?
>
>
>
Yes, it's effectively too late unless you're a VERY strong applicant.
If you remain interested, you will want to contact past winners of the concours for the positions in your section and see if they're willing to let you look at their proposals to gain insights. There is also <http://postes.smai.emath.fr/current/index.php> for the general positions in France; I can't remember if CNRS positions get posted there too.
Upvotes: 1
|
2022/12/14
| 3,960
| 15,640
|
<issue_start>username_0: No one from my UK university appears able to give me a concrete example of an answer for coursework or an exam that would get over 80% for any of my MBA modules.
It was suggested [here](https://academia.stackexchange.com/questions/191585/) that this is because scores over 80% are not really achievable in the UK; that such a score would indicate having done better than the instructor could have done themselves. Is this true? Are these scores basically unachievable in the UK?<issue_comment>username_1: No, though it depends on the course and probably depends on the university. In some courses, e.g. pure mathematics, it has to be possible to get 100%. Perhaps in an MBA at your university it is very rare.
>
> such a score would indicate having done better than the instructor could have done themselves.
>
>
>
This doesn't seem very relevant or meaningful. It is possible for a student to do better than an instructor would have done. Perhaps the student works very fast.
Upvotes: 4 <issue_comment>username_2: I have to admit, as somebody coming from a different academic system, I also find it quite odd that in the UK system, 70% is enough for the best grade ("first"). As a result, there is a fairly *wide range* of marks (70-100) in this highest "grade band".
As you say, 80+ (or 85+) marks are indeed extremely rare and definitely more of an exception than the rule. Your typical CRGs (Criterion Reference Grids - student-facing, module-specific marking guidelines) would not usually contain any guidance on any nuance above 70%. However, here are some quotes from my Universities general [assessment policy](https://secretariat.blogs.lincoln.ac.uk/files/2022/09/Management-of-Assessment-Policy-1.pdf). I know you asked about an MBA (so, Master level), but I am including some quotes referring to undergraduate, for completeness:
| Level | Band | Description |
| --- | --- | --- |
| UG | 80-89 | Work consistent with first class performance which is exceptional in most areas. |
| UG | 90-100 | Work consistent with first class performance which is exceptional in all areas. |
| UG | 85-100 | The student has shown exceptional knowledge and understanding, well beyond the threshold expectation of a graduate at this level and *significantly beyond what has been taught*. |
| PG | 80-89 | Work consistent with a distinction and is exceptional in most areas. |
| PG | 90-100 | Work consistent with a performance which is exceptional in all areas and could have *the potential for publication*. |
| PG | 85-100 | This work meets and often *exceeds the standard for distinction*, as described in the 70-85 band, across all subcategories of criteria. Typically, the work is of such a quality that indicates a *student capable of doctoral research in the discipline* and, in principle, has *potential for publication or exhibition with* further refinement as appropriate. Reflects critically on own positionality, nature and status of knowledge with discipline. |
For me, personally, even the description of something being "exceptional in all areas" is quite vague -- but I find the Postgraduate level explanation for the 85-100 band to be the one that clarified it up for me: a mark of 85+ indicates a student who would be a strong doctoral candidate in the topic. In my experience so far, there are about 1-2 students with the potential to do doctorate-level research in every cohort of 100-200 students at the MSc level at my University. This about matches my observed frequency of marks in the 85+ range amongst our student body.
For more details, you can check our [assessment policy](https://secretariat.blogs.lincoln.ac.uk/files/2022/09/Management-of-Assessment-Policy-1.pdf) yourself (or maybe try and find an equivalent document from your own University) - the parts relevant to this question are sections 3.2 (UG), 4.3 (PG), Appendix 2 (UG) and Appendix 3 (PG).
Upvotes: 6 [selected_answer]<issue_comment>username_3: My perspective as someone who went to university in the UK is that it depends on the nature of the course or exam.
Some exam questions, particularly in technical subjects, have a very definite "right answer". On these it is very much possible to get full marks if you know the material, show your working and don't make any mistakes. If you know the course material well and are good at not making mistakes, it's very much possible to get marks in the 90s on such exams.
On the other hand, as you move from exams to coursework, or you move to less technical, more human subjects, the marking becomes rather more subjective. The impression I get is that academics are far more reluctant to give super-high marks for such answers.
My brother, who did a maths degree, got quite a lot of marks in the 90s. I did an EEE degree and I think I got one course in the 90s, some in the 80s, but also many much lower.
Upvotes: 2 <issue_comment>username_4: ***In short, it's not impossible. The proof being that I averaged around 83% in my time at a UK university doing an electronics engineering degree.***
---
In long, it does depend somewhat on the subject matter. If your exams consist of right or wrong answers, like in maths, then getting 100% is a simple matter of doing the calculations/proofs correctly. If your exams are more analytical/essay/project based, then it's a lot harder. I had a mixture of both, but I actually tended to do better in the longer-form stuff than the right/wrong stuff, but I'm somewhat of an anomaly there.
As a general rule, anything that earns above 70-80% should be of a publishable standard (obviously it won't meet the requirements of novelty, but it should be of that standard of quality). This is an *extremely* high bar for an undergraduate, but it can be achieved. You just generally have to go above and beyond what's expected of you in the assignment.
Upvotes: 2 <issue_comment>username_5: This will depend significantly on the university and the course. From my experience of teaching computer science at two UK universities, coursework marks in the 80s and 90s were very much achievable and some students did achieve them.
I once set a coursework project with a completely objective marking scheme which was communicated to students in advance, and one student (out of about a hundred) scored 100%. There was no leeway for me to give that student less than 100% on that assessment without grading according to different rules than I had set out, nor would I have wanted to - they earned it fair and square.
That's obviously an extreme example, but no eyebrows were raised at the exam board when the marks were approved, and indeed it was not that unusual on the large courses to see a few coursework marks in the 90+ range even when the marking schemes required more academic judgement from the examiners. But I suppose there were other courses, including some at the same universities, where the distributions of marks were substantially different.
So the claim might be true at particular universities or on particular courses, but you can't really make general statements about all UK universities like this, because each lecturer sets their own standards, and the university would only demand a lecturer change their module's standards in rare cases.
Upvotes: 3 <issue_comment>username_6: It depends entirely on what university you go to (and when you go there, because these things change). It may depend on your subject too.
At the first university I went to, it was possible to get very high marks indeed (I saw a few marks awarded in the 90%s). However, marks over 80 were rare. A mark of 70% got the student the equivalent of a first, or a distinction if the course was post-graduate.
At UCL, which I attended many years later, marks substantially above 70% were extremely rare, certainly in my subject. My supervisor, the head of department, advised me that you were effectively marked out of 72 on assignments (not quite so harsh in exams). For a distinction you needed 70%. You needed to do spectacularly well to get 71%. That's what was said. This should be taken with a pinch of salt, however, because they gave out a 73% mark on at least one occasion.
Upvotes: 2 <issue_comment>username_7: Firstly, most UK universities I've been involved with explicitly make a point of saying that although they mark on a 100 point scale, that it shouldn't be thought of as a 0%-100% scale: getting evertying wrong doesn't get you 0, and getting everything correct doesn't get you 100, but more importantly, getting half of things correct would not score you 50.
Once upon a time we used to mark on a 1-16 scale, where a mark of 16 corresponded to what we would now give 80, and I think that was much less confusing.
Lots of comments here about marking things that are objectively correct or incorrect. Worth pointing out that where we have such elements, getting everything correct would not get you a mark of 100 (because 100 points =/= 100%). Actually, the only exam where all questions have right/wrong answers is a first-year multiple choice exam, and that's the only exam we have curve grading for.
At Oxbridge, I believe mathematics exams and homework are marked alpha, beta, gamma to avoid confusion, where you only get an alpha if you not only solve the problem, but solve it with particular elegance.
Certainly when I was a Cambridge undergrad (in Biology), to get 70 you needed to demonstrate one or more of:
* Writing with flair and verve
* Evidence of original and novel thought
* Knowledge and understanding beyond that in the taught curriculum.
and that only got you 70!
Where I am now we have the following criteria:
* 72, 75: student meets the marker's full expectations on most criteria. Translation: If you wrote a list of the things a student could be expected to achieve in different categories (accuracy, breadth, structure, reading outside the course, etc.), then they tick all the boxes in most (but not necessarily all) of those categories.
* 80: Student meets all of the marker's expectations for the best that could be achieved in the circumstances. Translation: Whatever the examiner could have written down as things required of the student, they have fulfilled them all, taking into account that this is a 1-hour exam essay or a 1-week course. This will almost always include knowledge outside that taught on the course.
* 80-90: Student exceeds what the examiner could have expected of them under the circumstances. Translation: The student went above and beyond what the examiner thought possible, and this would almost always include evidence of extensive personal reading beyond the recommended reading, and evidence of critical or original thinking.
* 90-100: Student substantially exceeds what the examiner could have expected to produce an exceptional piece of work. Translation - don't even bother thinking about it. There is probably a joke somewhere about 100 meaning perfect, but only god is perfect, and since god isn't a student at the uni, no one will ever get 100.
It is possible to get above an 80 on an individual piece of work; indeed, there will probably be one or two 85s on our all-student courses (around 100-150 students) every year. I can think of two grades of 90 that I have given in my 8 years marking undergrad and taught-postgrad work, both for coursework where an undergrad student went above and beyond to produce work that was more or less worthy of publication.
One way to think of it might be that the relationship between quality of work and grade is logistic: with increased quality, the grade asymptotically approaches 100, but never actually gets there.
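As a purely illustrative sketch (my own toy formula, not any official marking scheme), such a logistic mapping from an unbounded quality score q to a mark could look like:

```latex
\text{mark}(q) = \frac{100}{1 + e^{-k(q - q_0)}}
```

where q_0 is the quality level corresponding to a mark of 50 and k controls how quickly marks saturate; the mark approaches 100 as quality grows, but never actually reaches it.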
---
A couple of other points:
* the range of grades given is something repeatedly brought up by external examiners, and an exhortation to "use the full range of marks available" is something I've seen on every external examiner's report.
* Your department might be reluctant to show you a piece of work with an 80+ grade, because they don't want you using it as a template. At least part of getting a good grade is about being original, and not following a template.
Upvotes: 3 <issue_comment>username_8: This question is unanswerable, as what counts as a good mark varies institution to institution, subject to subject. It's not uncommon for 70% or less to be enough for a 1st, and 40% for a pass, with much higher scores attainable, but probably only if you are going to win a Nobel Prize in the future. From what I have seen, institutions that forbid the same staff teaching and examining the same year group of students often have lower scores for high final grades. Many (but not all) UK universities implement this. Obviously, if the lecturer is also examining, then the examination is likely to be much closer to the lectures and marks are higher.
Upvotes: 0 <issue_comment>username_9: It really depends on the test. In many subjects, there are hard, medium, and easy questions. In a test you need a mix so you can grade everyone reasonably well. My first maths tests at university consisted of enough hard questions to fill the whole time, enough medium questions to fill the whole time, and enough easier questions to fill the whole time, so doing *everything* was physically impossible. With one exception, nobody ever scored more than 50%.
Upvotes: 0 <issue_comment>username_10: I will give an anecdotal answer:
A long, long time ago (mid '80s) I was a graduate student at Cambridge University, and got involved in "marking" entrance exam papers. The section I was asked to mark had a potential score of 40 points, and I was supposed to score "leniently enough" (in accordance with a rubric I was given) to get a median of 20 points. Basically, "a hint of a correct answer" would get a fraction of the available points (10 questions with 4 max points each). But this was a deliberately HARD exam - trying to select the few that were exceptional.
After grading about 100 papers (where one submission would typically comprise 15 - 20 pages of "answers") with a median close to 20 points, I found myself holding a 4 page answer sheet. "Oh, here we go" I thought. How wrong I was...
4 points; 4 points; 4 points; 3 points; 4 points; ...
When I was all done, using the same criteria that was getting me the median of 20 points, this person had 37 out of a possible 40 points. Using just 4 sheets of paper, where the "wafflers" had needed 15 to 20.
Truly exceptional. Rare? Yes. Possible? Apparently. 37/40 is over 90%.
Upvotes: 0 <issue_comment>username_11: The UK education system is very peculiar and full of lies (as everywhere).
First, as long as you pay your tuition, you are unlikely to fail a course or module unless you do essentially NOTHING. In fact, failing students is bad for the rankings, so why would they fail them? So a pass mark of 50 is very achievable.
Then, getting above 70 depends on how objective the questions are, and how fair the exams are.
Even in engineering there are subjective questions, where the grade comes down to the opinion of whoever assesses the exams (often PhD students). Many times they ask things they never taught. UK education is based on independent self-study rather than just what is taught in lectures (many lecturers are poor teachers), so markers like to see whether students have done more research on the topic; but that, again, is subjective rather than objective, and falls under the opinion of the person who marks.
However, I can tell you that if you are a good independent student who covers the modules and does a little more than the average, more than 70 is achievable. Of course, it depends on the person who marks.
Upvotes: 1
|
2022/12/14
| 523
| 2,240
|
<issue_start>username_0: I am currently doing my PhD (my third year just started). My field is empirical finance. My PhD consists of writing three papers and is set to last four years.
My supervisor does basically nothing. In the beginning, a friend of mine who started his PhD with me and I began working on a research idea of my supervisor's that is, in my eyes, pretty meaningless. We are now nearly finished, and I consider it a rather bad project. The results are not interesting, and methodologically it is not interesting either.
It would probably suffice as one of the three PhD papers, especially because I have another really good one finished already. But it stresses me out just thinking about the paper - I don’t like it at all. However, my coauthor definitely wants to finish the project. I am now thinking of dropping coauthorship. Is this the reasonable thing to do here? Is it bad to be a coauthor on a bad paper, or to use it as a PhD paper? Or does it not really matter as long as my supervisor says it’s okay?<issue_comment>username_1: You'll be surprised by what other people think is good or bad. There may be value in some parts of your paper, or your paper may trigger other questions or research ideas.
I recall a paper in my area that presented a very simple solution to a problem. It was published in a low-quality conference as well. However, to date it has 20K+ cites. This is because everybody uses it as a benchmark.
I would classify a 'bad' paper as one that contains unethical elements, e.g., plagiarism or fabricated results.
Upvotes: 7 [selected_answer]<issue_comment>username_2: You've said the research idea is meaningless, the methodology and results are not interesting, and you just don't like it.
I can't be sure but it sounds like you've done the science equivalent of grunt work:
Pedestrian research of a bland question using standard techniques and producing completely normal results.
None of that makes it a bad paper or bad science. Somebody's got to do the boring research. As long as it's done properly and competently, you've done the entire world a favour by documenting and publishing your results so that nobody else has to do it in future.
Upvotes: 3
|
2022/12/14
| 3,100
| 12,638
|
<issue_start>username_0: My work deals with transformations in matter, wherein one physical form (a 'phase') changes to another. The initial form has often traditionally been called the 'parent phase' and the transformed form has been called the 'daughter phase'. I think this borrows from nuclear physics, where a 'parent' nucleus transmutes into a 'daughter' nucleus; which may further transmute, becoming a 'parent' in turn.
Of late, it has been pointed out (in private conversation) that 'parent-daughter' is not in the spirit of inclusive language. I'm taking this at face value, and would like suitable alternatives.
\*\*It has been pointed out that criticism of this premise should be allowed space - I welcome this too, though the primary interest is in alternative terminology.
\*\*Several commenters have suggested replacing 'daughter' with the gender-neutral 'child'. This is already a great improvement, but there could be another apprehension: the 'parent' and 'child' often don't co-exist, since one replaces the other (to various degrees). It is not uncommon to describe the transformation as one phase 'consuming' the other. I speculate that this aspect may lead to some discomfort (it should apply equally to nuclides then). This caveat somewhat distinguishes this case from the data structures example, where the child (a class, for example) inherits from the parent, but both continue to exist.
'Precursor-product' is one possibility, but this (possibly) carries different connotations in chemistry (in my limited understanding, precursors are secondary to the product in value terms - although this could be my ignorance). 'Pre-transformation'/'post-transformation' could work, but they sound clumsy. 'Initial/final' is neither accurate nor properly descriptive. 'Phase 1/Phase 2' could also work, but they are tedious because the reader has to refer back to what 1 and 2 mean.
Are there any suitable alternatives, either new, or existing elsewhere that could fit the requirement?<issue_comment>username_1: How about "parent" and "child"? By keeping "parent", you are maintaining continuity with the terminology that others are familiar with, which is generally good for scientific communication. By using "child", the relationship is equally clear as "daughter". For those who are not familiar with the parent-daughter convention, parent-child is equally clear. For those who are used to parent-daughter, then it will be fairly clear that you are being explicitly gender-neutral without a significant change in meaning.
Upvotes: 6 [selected_answer]<issue_comment>username_2: Sorry. I'm sensitive about such things and inclusivity has high value for me, but I don't see any issue in the "traditional" terms. They aren't pejorative. They aren't intended to be sexist. They don't exclude. They don't refer in any way to humans. It is just a metaphor and already well understood.
I see no issue with using "child", though it is a biological metaphor for something non-biological, just as "daughter" is. "Derived" might be accurate if you think you really need to be (and seem) "politically correct".
Using non-traditional terms in new papers should have a high bar for acceptance, since they can confuse people. The old papers on which a field is built suddenly become either less understandable or "suspect". It isn't my field, so not my place to set that bar, but it should be a consideration.
I'll note, for the record, that it was progressives/liberals in my daughter's generation, many at women's colleges, who initiated the term "politically correct" for such overly "sensitive" thinking among their own peers. Yes, they understood sexism pretty clearly, but also understood that some things are just ... searching for a non-offensive word here and not coming up with it. Maybe "childish" works.
---
What does "inclusivity" even *mean* for transformations of matter?
Upvotes: 6 <issue_comment>username_3: As @wimi [already commented](https://academia.stackexchange.com/questions/191607/inclusive-language-alternatives-to-parent-daughter-for-inanimate-objects#comment517405_191607), there is a simple replacement: use the "parent and child" analogy as it is used in the programming world, instead of "parent and daughter".
Regarding the issue that:
>
> "the 'parent' and 'child' often don't co-exist"
>
>
>
That issue was already intrinsic to 'parent' and 'daughter'. As <NAME> once said in his blog, make things as simple as possible, but not simpler.
If the parent and child analogy does not work, use initial phase and final phase, or whatever wording works (if you feel that describing them as phase 1 and phase 2 is a burden).
Upvotes: 4 <issue_comment>username_4: **Frame challenge:** There is nothing "non-inclusive" about using the word daughter as a metaphor.
Your assumption that the "daughter" terminology is non-inclusive is grounded in an apparent belief that using any gendered word in a metaphorical context is "non-inclusive" to people whose gender was not referenced. This belief reminds me of a joke about a mother who gave a present of two ties to her son-in-law. On his next visit he wore one of them, thinking that would please her. Offended, she asked “What, you don’t like the other one?”
More seriously, I see several problems with this tendency to automatically classify as "non-inclusive" any non-literal mention of some but not all genders. First, you are ignoring the fact that many languages, collectively spoken by billions of people around the world, assign a gender to all nouns. (It may very well be the case that the use of "daughter" in the scientific context you are asking about can be traced back to one of those languages, see below.) Thus, this particular notion of non-inclusiveness strikes me as being very Anglocentric, to an extent that could even reasonably be perceived as offensive by speakers of some of those other languages.
Second, I find it problematic that the belief that something is non-inclusive cannot be argued with, since if you disagree, then by the self-reinforcing beliefs of some of the people who promote these kinds of concepts, you are by definition "against inclusion", a sexist, bigot, etc. Who wants to risk being labeled with such terms? In other words, some of the beliefs around "inclusive language" strike me as being part of "a closed system of logic" (a term that I believe was originally used as part of a set of criteria for recognizing cults, see [here](https://www.cultrecover.com/sites/default/files/pdfs/singers_6conditions.pdf)). When I see self-justifying logic of this type, I am immediately put on my guard, and would like to ask: forget vague terms like "non-inclusive" that anyone can define in a way that suits their preferred narrative; please explain, what exactly makes this particular use of language *bad*?
Third, let's consider the specific term "daughter" you are asking about. You've stated that the use of this term in the scientific context of your research area originates in terms like "daughter nucleus" in nuclear physics. The historical origin of that term is discussed [here](https://hsm.stackexchange.com/questions/13009/what-is-the-history-of-the-use-of-the-word-daughter-for-a-decay-product-in-nucle). Meitner and Frisch, the two discoverers of nuclear fission who apparently introduced the "daughter" terminology, were both Austrian-born scientists (i.e., speakers of a language with gendered nouns). You could say they were an "inclusive" team -- one of them was a woman, the other a man. Moreover, it looks like they borrowed their terms from an earlier accepted terminology in *biology* of referring to cells produced by other cells as "daughter cells" (with the original cell being termed the "mother cell"). That terminology, which is discussed [here](https://www.quora.com/Why-are-daughter-cells-not-known-as-son-cells), seems pretty logical, as it is actually based on a scientifically reasonable metaphor that only "female" things can produce their own offspring.
Given this more historically informed way of looking at things, is it still tenable to argue that "daughter phase" is non-inclusive? It seems to me that if you still support this premise, in order to be consistent you would also have to deem the etymologically related uses of "daughter nucleus" in nuclear physics and "daughter/mother cell" in biology as problematic and in need of complete overhaul. So basically you'd be rejecting much of the accepted terminology of multiple areas of science (as well as the German language, French, Spanish, etc). Well, perhaps some people would hold such extreme positions. But in my opinion, this way lies nothing but madness.
**Summary:** we should seek to be inclusive wherever possible, sure. But this issue simply isn't an example where that principle has any relevance.
Upvotes: 5 <issue_comment>username_5: For situations in which I wanted to discuss the transformation of a particle or system from a "parent" (the word I used) state into multiple subsequent states, which were potentially quite different in character, I used (instead of "daughter") the word "progeny." You might find that useful.
Upvotes: 3 <issue_comment>username_6: As a software engineer, I found myself in need of this metaphor many times when describing data structures and state transitions. I always used "parent" and "child", not because I didn't want to offend anyone, but because if I used "mother" and "daughter" some of my readers might wonder if I was planning to use "father" and "son" for some other relationship, and holding "parent" and "child" in reserve for the more general case.
Software engineers tend to overthink everything. However, we needed to be careful and precise with language because from c. 1956 to c. 1984 there was very little common idiom in software. Most of these data structures and state transitions were being described for the first time.
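For illustration, here is a minimal sketch (my own, not from any particular codebase) of how that gender-neutral vocabulary typically appears in a tree data structure:

```python
class Node:
    """A tree node using the conventional gender-neutral terminology."""

    def __init__(self, value, parent=None):
        self.value = value
        self.parent = parent      # the node this one was derived from; None for the root
        self.children = []        # nodes derived from this one
        if parent is not None:
            parent.children.append(self)

# The same two words cover every level of the hierarchy:
root = Node("initial state")
derived = Node("derived state", parent=root)
```

The terminology stays unambiguous no matter how many further relationships are added.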
Upvotes: 4 <issue_comment>username_7: Might "parent"/"offspring" avoid the negative connotations of "child"?
Otherwise, I have a vague feeling that "parent"/"daughter" is used for asexual reproduction, which seems to be the best metaphor here.
Upvotes: 1 <issue_comment>username_8: >
> The initial form has often traditionally been called the 'parent phase' and the transformed form has been called the 'daughter phase'.
>
>
>
In my experience, this is not true. Phases of matter are traditionally given much less helpful names. Some examples are:
* Phases of water ice are named with Roman numerals
* Phases of certain crystals are named with lower case Greek letters
* Crystal phases are named according to space group symbols, for which there are nine different conventions listed in Wikipedia
There are also names like solid, liquid, supercritical, glass, etc.
"parent/daughter" is rather non-specific, and in my experience, non-traditional. It would be better to use one of the more traditional names.
Upvotes: 1 <issue_comment>username_9: A lot of good suggestions already, here is a simple terminology from chemistry:
**educt - product**
The educt is a material before a chemical reaction; the product is what comes out of it. This seems to hit the nail on the head for what your first paragraph describes. It is short and devoid of biological connotation.
Upvotes: 2 <issue_comment>username_10: Root/Trunk, Branch, Leaf
------------------------
Using a tree analogy takes away any connection to parents and children, etc. You'd use "Root" (or possibly "Trunk") to refer to the oldest ancestor. Any children of the root are "Branches", which may have their own Branches, etc. Children without any Branches are called "Leaves".
This terminology is often used in computer data structures, but can be applied in other circumstances as well. One thing it also gets around is the issue that the "Parent/Child" analogy is somewhat inaccurate, in that in reality, a child has not 1 but 2 parents (assuming normal sexual reproduction as opposed to mitosis or something), so the analogy breaks down when you consider that. (Yes, a tree has multiple branching roots as well, but we generally restrict the analogy to the part of the tree that's above ground.)
The only problem with this is applying it to the original example of atomic nuclei, as it implies that the elements are all still *connected* to each other as they would be on an actual tree; but without knowing what specific case you're talking about, it's hard to say whether this would be a good or bad thing.
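As a minimal sketch (my own, with hypothetical field names), classifying nodes under this terminology is straightforward:

```python
from dataclasses import dataclass, field


@dataclass
class TreeNode:
    label: str
    parent: "TreeNode | None" = None
    children: list["TreeNode"] = field(default_factory=list)


def classify(node: TreeNode) -> str:
    # Root: no parent. Leaf: no children. Branch: everything in between.
    if node.parent is None:
        return "root"
    if not node.children:
        return "leaf"
    return "branch"
```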
Upvotes: 2 <issue_comment>username_11: How about any of the following pairs?
* input, output
* source, destination
* initial, final
* previous, next
* old, new
Upvotes: 1
|
2022/12/14
| 2,884
| 11,724
|
<issue_start>username_0: I pleaded guilty and was suspended for plagiarism (for a lab exercise) during my first year in undergrad. I felt extremely bad and apologized to my professor on the spot without even thinking of defending myself. I took the penalty suspension of 2 months, which automatically forfeited my entire first semester of my sophomore year. Those two months were filled with anxiety and depression, and I almost dropped out of college because of it. Safe to say, I committed myself to continuing my course because I was very passionate about it; I powered through, and I'm currently in my last year as an undergraduate.
The field I want to get into usually requires a graduate degree, and I was thinking of pursuing a master's degree after I graduate. However, when I was looking at some applications, some of them required that I mention and explain whether I've ever had an academic delinquency.
I was wondering if anyone has had a similar experience who took up a master's degree or knows something about this situation. How would you suggest I go about this?
|
2022/12/14
| 845
| 3,902
|
<issue_start>username_0: I am trying to understand better the factors that affect the decisions of journal editors (re: publication). I am not interested in strategies to boost my chances of publication; rather, I want to understand the explicit policies and objectives that incentivize publication choices. Since policies differ between journals and regions, I am interested in *examples* of how editors' objectives translate into policy in western European/American-based journals, e.g.:
* Is the goal of an editor to maximize impact factor? If so, are editors who facilitate publication of work that becomes highly cited recognized or awarded in some way, or are other strategies used by the editorial board to pursue this goal?
* Is the goal of an editor to have a large throughput? If so, are editors subject to throughput quotas? If not, does this create tension in journals with APCs that have budgets to meet?
* Do the motivations differ between editors and publishers? If so, what kind of journal policies result from these tensions?<issue_comment>username_1: The personal motivations for becoming an editor are prestige and service to the community.
The primary goals of a journal depend on the publisher. Learned societies, e.g. the ACM and IEEE in engineering or the Royal Society, are intent on the spread of science and engineering, on helping their members (e.g. by providing publication outlets), and on their own prestige, whereas established commercial publishers look for revenue by establishing a journal as a brand. Some newer publishers forgo building a brand.
Publishing can be extremely profitable, as university libraries have to provide access to publications. With the exception of predatory publishers and commercial publishers who maximize short-term profit, the goal is to have a product that is prestigious enough that the main customers, i.e. university libraries, simply have to buy it.
To build a brand, a journal needs to contain sought-after articles. A good journal enjoys a virtuous circle: by having already published good articles, the journal attracts authors with good articles who prefer the name recognition of the journal to other outlets. This makes it more likely for libraries to subscribe to the journal, which in turn gives it more recognition among potential authors. By specializing, a new journal might find both an audience and a group of authors.
In the short term, an editor has to fill the journal with good-quality articles justifying the subscription fees. Over the last decades, the cost of adding one page to a journal has gone down, which takes away one big reason for limiting the size and number of articles published. However, a journal that accepts just about everything will lose prestige and no longer attract good authors. The quality of editing and reviewing also enters the picture: bad reviewing, and having articles retracted as a consequence, is a big loss of prestige.
In the last decades, informal renown has been replaced with bibliometric measures such as the impact factor. These are more important for the less prestigious journals and are also the target of "gaming the system".
Upvotes: 2 <issue_comment>username_2: I can add my two cents as an associate editor of a journal: in this specific journal there are *no* objectives of the editor-in-chief that are communicated to the associate editors (really, none at all). Hence, I think that all associate editors follow their own incentives. Speaking just for myself, I can say that I accept any paper whose topic fits the scope of the journal and for which the reviewers recommend acceptance. There are no other factors involved.
I should add that I have been an associate editor only for a short time and haven't participated in any editorial board meetings. It may well be that the board discusses the objectives of the journal at such meetings…
Upvotes: 2
|
2022/12/15
| 951
| 4,274
|
<issue_start>username_0: In software development, there are many development methodologies with a focus on speed/efficiency: Agile, Scrum, DevOps, Kanban etc.
Are there similar methodologies in scientific research? (something like "Agile Research")
A couple of related ideas that come to mind are: [design of experiments](https://en.wikipedia.org/wiki/Design_of_experiments) (how to vary independent variables for optimum statistical inference and how to sequence experiments) and [MLOps](https://www.databricks.com/glossary/mlops) (similar to DevOps but in machine learning). While those address some aspects, they aren't a complete methodology for doing research.<issue_comment>username_1: No. Not that I am aware of. Here's a couple of things.
Generally speaking, academic research is relatively independent when compared to a product dev/delivery cycle. Agile methodology "pays better" in more complex group workstreams, so the need to codify practices in my PhD program is muted. I'm sure there are larger academic research projects that would definitely benefit from more rigor in the process, but I feel that's the exception.
At the risk of being controversial, agile is business first and methodology second. The training costs money; the trainers get the money. Academic groups don't line up to spend that money because it's not required. Additionally, there is less sell pressure from the agile trainers, for whatever reason. Perhaps this is a good business opportunity?
Upvotes: 2 <issue_comment>username_2: As someone who has used, taught, and published things about agile software process, I can give you some ideas. I don't know about tested successful existing practice for applying the ideas to research, however.
First, the *practices* that agile developers use are mostly not applicable to research, though consistent "pairing" probably is. You really need to go back to the *principles* from which agile was developed to find appropriate principles and practices. For example, it is unlikely that a research team has a "customer (product owner)" in the sense of an agile software project, unless, perhaps, for product research. So, I'll assume you are interested in basic research, leading to an opening of new ideas and theories. I'll also assume, as in agile software development that you are interested in the work of teams.
The goal, all along, was to create high-quality software in a flexible, changing environment using continuous feedback on the product as it developed. There was also a reluctance to guess about needs, and thereby be forced to make corrections due to bad guesses, while still enabling changes due to changing conditions. How do you hit a moving target effectively and (relatively) efficiently?
The principles, often stated, are: We value
* individuals and interactions over processes and tools,
* working software over comprehensive documentation,
* customer collaboration over contract negotiation,
* responding to change over following a plan.
As you can see even these principles need some restatement for research. These were devised by experienced developers to guide the creation of the actual practices.
But note that research, being an attempt to look into the unknown, has some of the same underlying characteristics. It isn't so much that the target moves, but that it is unknown, so our view of it at any moment might change.
So, what would agile research principles be? What would researchers value?
Let me suggest a few, though I can't claim to be either complete or especially near the mark. This requires some group thinking to come to a consensus. It would also require being put to the test.
But, I suggest that we value:
* collaboration over individual work,
* insight over individual results,
* flexibility over established process,
* (questionable) small projects over big,
* peer feedback over holding details back.
And note, importantly, that these aren't binary choices. It isn't that individual work isn't valued, it is just a matter of shifting balance. And, the simple statements above need a lot of expansion and elaboration.
I'm not sure I've got it right yet, and won't on my own, but it might give you a starting framework that you could use to develop such a framework.
Upvotes: 3
|
2022/12/15
| 6,023
| 25,015
|
<issue_start>username_0: For many years one of the main problems for academic essays and BA/MA/PhD theses was plagiarism. However with the [recent advent of artificial intelligence (AI) that can write high quality texts](https://www.nature.com/articles/d41586-022-04397-7) (e.g. ChatGPT) a fresh concern has risen: was a text written by a student or by an AI?
Plagiarism scanners do not detect AI texts (I tried it!) because they are not copied and pasted. Even if you have a certain suspicion that a text might be written by an AI (e.g. because an overall direction in the text is missing or the text is just too good to be true given the student's previous essays) there is no real way to prove the suspicion.
How to deal with this new problem? Yesterday we had a faculty meeting where we discussed this issue of AI texts but no real conclusion - so I was thinking of getting some input from here ...<issue_comment>username_1: While the idea of ChatGPT writing entire MA/PhD theses is certainly entertaining, right now the technology just isn't at that level. It can write multiple paragraphs at a time, and maybe a human can manually piece those together into a proper "paper", but it's limited in what the human can feed in. It's really a glorified chat bot, and it's not designed to write essays for students. For example, there's no supported way you can feed in an article/story and have ChatGPT respond to it. Moreover, there are simply limits to ChatGPT's semantic understanding.
In the future I suppose some companies might make a 'nefarious' version of ChatGPT and monetize it, considering all the various companies we already have that monetize academic dishonesty. When the technology is more at that level, hopefully there will be new tools on the detection side that can deal with the issue.
Upvotes: 4 <issue_comment>username_2: At the moment, and as a minimum, you need a clear policy on its use, perhaps forbidding its use. That isn't enough, of course, but you need to make it clear. That will ameliorate the problem slightly, as most students will comply if the policy is stated in a reasonable way that emphasizes the learning goals. The reasoning behind a policy needs to be made as clear as the policy itself (as usual).
Mid term, it is likely that AI solutions will emerge that can catch the use of such things with fair accuracy, maybe even good accuracy. Some are already underway. They might even provide a good balance between false positives and false negatives. The former might be handled if students were always subject to a follow-up oral presentation of essays and such, as is done with theses.
I think that the development of a detection tool is a worthwhile AI research project at the moment.
Long term, it is harder. I don't think (but am not certain) that ChatGPT in particular tries to obfuscate its use, but that can happen. At some point, we may just need to completely change the techniques we use to encourage and evaluate students' honest work. It is worth spending some time on that now, and trying out ideas. Oral exams don't scale well, but are harder to misuse.
At the moment the AI text generation isn't very creative and a careful reading might catch a lot of it - especially for longer texts. But AI generated work and poor but honest work might be harder to distinguish.
As I understand it, ChatGPT doesn't have access to the internet (say, Wikipedia). That would change the game, perhaps, but might also make plagiarism detection easier.
---
I'll note that forbidding its use might not be the only proper policy. Use with citation might be considered as you develop such a policy.
Upvotes: 3 <issue_comment>username_3: One general approach is to make sure the assignment is not **bullshitable**.
ChatGPT generates [bullshit](https://en.wikipedia.org/wiki/On_Bullshit), i.e. text that attempts to resemble genuine responses to the prompt, but does not attempt to actually be correct. The AI does not generally understand what it means to be correct, it just produces relevant text (relevant text is often correct, but often incorrect).
Unfortunately, many students use a similar process, so it can be hard to distinguish. So my suggestion is to design prompts where bullshit results in a bad grade (and is easy for you to detect), even if an automated tool cannot detect cheating.
* [ChatGPT uses nonexistent, made-up references](https://news.ycombinator.com/item?id=33841672). Ask students to submit references in a format that can be easily spot-checked (for DOI-based references this can even be scripted; see the sketch after this list).
* Ask students to provide drafts of their work, incomplete versions showing their thought process, and maybe errors they made on the way.
* Avoid open-ended prompts like "write X words about Y".
* Prefer prompts with a *global* goal and require each *local* part of the essay to contribute to that goal. Take off points in the rubric for parts of the essay that do not clearly support the global goal. I.e. take off points for rambling or unnecessary material. Take points off for good material if it is not shown how that material contributes to the global goal.
* Do not prompt for content for content's sake. Instead of a minimum word count, give a maximum word limit and require students to accomplish a definable goal within that limited space.
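Here's a minimal sketch of scripted spot-checking (my own; it assumes students supply DOIs, uses the public Crossref API, and relies on the third-party `requests` package):

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref knows this DOI; unknown DOIs return HTTP 404."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hypothetical usage: flag entries whose DOI cannot be verified.
for doi in ["10.1000/hypothetical-doi-1", "10.1000/hypothetical-doi-2"]:
    if not doi_exists(doi):
        print(f"Could not verify {doi} -- check this reference by hand")
```

A failed lookup isn't proof of fabrication (Crossref doesn't cover everything), but it tells you which entries to check by hand.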
---
Caveat: I haven't tried this, my background is more in AI and less in essay assignments. Also, this answer is focused on essays because this kind of cheating on a longer text, like a thesis, should be much easier to detect.
Example reference with similar opinion: [ChatGPT Is Dumber Than You Think](https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/) (The Atlantic)
Upvotes: 8 [selected_answer]<issue_comment>username_4: One might want to return to writing in-person essays. This would only apply in situations where the length and time involved made that appropriate, but for certain assignments (not theses!) one could imagine either taking class time for this, or proctoring in some fashion. Particularly as (current!) AI seems to "excel" with fairly short-form material, giving people 20 minutes to write such a short assignment in person may be a viable way to deal with it.
(That said, I like username_3's answer better, and have used it for many years for certain longer assignments.)
Upvotes: 3 <issue_comment>username_5: ### You ask the writer about the text
Frame change. You are not worried about computers writing papers. You are worried about people falsely claiming to be writing papers.
If a human writes a paper for another human you have the same problem. And the same *solution*: ask the writer or interview them about the text.
A human that generates parts or entire papers automatically will have a very hard time explaining said parts.
Upvotes: 6 <issue_comment>username_6: Sort of an extension to [username_5's answer](https://academia.stackexchange.com/a/191678/): consider an oral exam. It's not going to be ideal since oral exams have their own set of problems, but ChatGPT can't fake an oral exam, and neither can Chegg.
You could also use an oral exam as a check to see if the student wrote the work. It won't prove an AI wrote the essay, but it could indicate the student did not write it.
Upvotes: 3 <issue_comment>username_7: The [International Baccalaureate](https://en.wikipedia.org/wiki/IB_Diploma_Programme) (IB) is an international high school curriculum used by many international schools. Although it is for high school, one part of it can provide insight to the OP: the "[Extended Essay](https://www.ibo.org/programmes/diploma-programme/curriculum/extended-essay/what-is-the-extended-essay/)" (EE), which is basically a 4,000-word essay.
For every student, the EE research and writing process includes three formal 20-30 minute "reflection sessions", which are interviews between the student and teacher, the purposes of which are to support the student's learning and progress, and to *check for authenticity*, i.e. to check if the essay was actually written by the student.
* In the first reflection session, they discuss the student's proposed research question.
* In the second reflection session, they discuss the student's first draft.
* In the third reflection session, they discuss the student's second (and final) version.
This system has been used by the IB for a long time, and it seems to work, as far as checking for authenticity. Perhaps a modified system could be considered for university.
Upvotes: 3 <issue_comment>username_8: Ask the AI questions about the subject of the assignment. If you ask enough questions, you will gain a good idea of the AI's limitations, and also a good idea of the AI's writing style.
Then in the assignment, require the student to answer a question that the AI was unable to answer.
The answers you receive from the students will show the AI's limitations and style if they are using the AI, otherwise they will at the very least be the work of a live human being.
Upvotes: 0 <issue_comment>username_9: I want to raise three points to guide the discussion, but they're too long for a comment, so an answer it is:
1. **We should** assume NLP models will continue becoming 1. better and 2. widespread. Consequence of (1): it will eventually become impossible for the well-trained human eye to sense that an essay was written by a machine. Consequence of (2): the solution of a "textual fingerprint" for identifying an auto-generated text, as mentioned above, won't be feasible. Currently, this could still work, because there is only one company offering one model for generating ChatGPT-quality text, namely OpenAI itself. Fast-forward 2 years, and there will be *many* such companies, perhaps even explicitly aimed towards high-schoolers. In that case, a teacher would have to go down the list verifying the fingerprint with each service, and such a service would be foolish (in their own market interest) to offer such a service in the first place.
2. **We should** view this in the broader and *existing* context of ghost-writing. Some take solace in being able to discern the "bullshittery level" of ChatGPT, but this is tackling the wrong problem. For decades, students have paid others to write assignments for them, and obviously, the task of detecting those texts has nothing to do with detecting that the resulting text was human-generated, because it was -- just not by the right human.
3. Some have claimed that something like a thesis cannot be generated by a machine, because they "cannot be creative". **We should refrain** from these kinds of statements, because 1. [even search algorithms](https://en.wikipedia.org/wiki/Evolved_antenna) can be creative, and 2. there are, sadly, many disciplines in academia where innovation hardly exists and [bullshittery is the entire game](https://xkcd.com/451/). Ask ChatGPT to write an opinion piece, and it'll give you an essay at high-schooler level. Then ask it to *do it in an academic style with complicated words*, and you've entered the domain of master's theses (or higher!) in some domains.
Perhaps the take-home opinion essay is just dead, and the future consists of technical reports and intensively researched term papers. Perhaps a fingerprinting system will be the future, but it will be the students whose capacities will be fingerprinted and tracked through time, not the texts.
Upvotes: 4 <issue_comment>username_10: One IT website has called ChatGPT “Dunning-Kruger as a service”. It creates very convincing bullshit.
It’s very annoying and you might need to train people up to detect it (the convincing nonsense, not that it was created by an AI), but for a while at least you can judge these submissions just by their quality.
Upvotes: 1 <issue_comment>username_11: An interesting idea on this topic comes from <NAME> at Stratechery: instead of banning the use of AIs, *require* it - the students' job is not to produce an essay, instead they have to check and correct what the AI says about the topic.
A quote from [his article](https://stratechery.com/2022/ai-homework/):
>
> Imagine that a school acquires an AI software suite that students are expected to use for their answers about Hobbes or anything else; every answer that is generated is recorded so that teachers can instantly ascertain that students didn’t use a different system. Moreover, instead of futilely demanding that students write essays themselves, teachers insist on AI. Here’s the thing, though: the system will frequently give the wrong answers (and not just on accident — wrong answers will be often pushed out on purpose); the real skill in the homework assignment will be in verifying the answers the system churns out — learning how to be a verifier and an editor, instead of a regurgitator.
>
>
>
Ben argues that the skills required to check and correct the unreliable AI's output are more relevant in the modern world than the skills required to write an essay in the first place. Perhaps as a bonus, the students will also put 2 and 2 together to realise that if they have an essay to submit to a different teacher, they can't expect ChatGPT to write a decent essay for them.
Upvotes: 5 <issue_comment>username_12: The question points to an underlying development that was identified by <NAME>: The continuing application of the [*No True Scotsman*](https://en.wikipedia.org/wiki/No_true_Scotsman) argument to intelligence. What we consider "truly" intelligent has changed with the advancing capabilities of machines, on the grounds that a task that can be mechanized is by definition not a sign of "true" intelligence. Far into the 1900s, playing world class chess or being able to translate reasonably well between a dozen languages would have been considered a sign of the highest intelligence. So would have been the ability to write reasoned essays in college grade English about almost any topic known to mankind (which is what ChatGPT does, of course).
The progress made in information technology shows us that all these tasks can be done by *mechanisms.* Nobody in their right mind, apart from the occasional excited google engineer, would claim that these mechanisms are "truly" intelligent. Because we don't think of ourselves as mechanisms, we backtrack and change our classification of what we consider "truly" intelligent. Everything that is rule-based is obviously not truly intelligent: You can beat it with stupid brute force. Everything that is simply based on pattern recognition is not truly intelligent: There is no true "understanding" and no "originality". But alas: The texts are good enough to [earn certificates](https://twitter.com/stephanemaarek/status/1600864604220964871) and [pass German college term tests](https://youtu.be/SS24iTTOn7o). *And this is just a prototype.*
There are a couple conclusions here. We can **either** backtrack further and say:
1. Much, if not most of what we do in academia is not "truly" intelligent. The amount of original, creative, essentially *unpredictable* work is small.
2. Much of what we do professionally (and what college education prepares us for) is not truly intelligent work. Programmers, radiologists, lawyers: Just pattern recognition and -application.
3. Our education and our professions are on the brink of being obsolete.
This is scary.
**Or** we hold our ground and continue to consider our education and professions at least somewhat intelligent. Then we cannot deny that we have produced intelligent machines. The Google engineer was right. Sooner rather than later, machines will be able to do *any* intellectual task we can do, including the ones one might currently consider original, creative, and essentially unpredictable. They will probably be able to perform them better than we do. In fact, they will probably become able to perform intellectual tasks that are entirely beyond our regular reach.1
This is even more scary.
My guess is that we will take a much bigger step back than ever before: We will redefine what it means not to be *intelligent* but to be *human.* We will be forced to realize that **intelligence is not what defines us as human.** It is probably not even art that defines us, or only insofar as art defines us as individuals.2 Instead, it is emotions: Love, compassion, passion, even hate. Machines are unable to feel and will be unable to feel for the foreseeable future.
P.S. Are you still waiting for an answer to your question?
1. Using ChatGPT is not any more plagiarism than using a pocket calculator. If it gets you results, it's a useful tool.
2. Therefore, don't be a <NAME>. Instead of a futile attempt at preventing the use of ChatGPT and the like, embrace it. Kaya3 wrote an answer in this direction. The future of academia and humanity lies not in defending the indefensible but in employing the useful.
3. Change the curriculum to stay relevant. Nobody teaches how to manually extract roots in algebra any longer, or matrix tricks. Try to teach things which may be hard for AI even in 20 years.
---
1 I'm not necessarily hinting at the prospect of a [technological singularity](https://en.wikipedia.org/wiki/Technological_singularity) which often involves an unpleasant quasi-religious sentiment; a much weaker development would suffice: Machines continue to improve on cognitive tasks (like recognizing cancer cells, designing mechanical things, predicting the weather, making investment decisions, driving a car). We increasingly find that they do it better than we typically do, and we increasingly rely on them. This is a gradual development without tipping points of any kind. (It is funny that the proponents of a singularity recognize that technological development is exponential but fail to see that exponential curves are emphatically void of singularities; quite to the contrary: They look the same everywhere. The discovery of fire, advent of agriculture or the industrial revolution have disrupted societies much more than a mechanical lawyer or programmer ever could.)
2 As today, different individuals would produce different art. This would include mechanical individuals, i.e. different neural nets, or differently trained neural nets (the equivalent of separately raised identical twins). Like today, experts (including, of course, mechanical experts) would be able to make an educated guess which individual (including, of course, mechanical individuals) created a given piece of art, or at least which tribe and era it is from (e.g. 17th century Flemish, 19th century Xhosa, or 202x Dall E3 lineage).
Upvotes: 3 <issue_comment>username_13: Perhaps a direct inquiry to the student, such as "did you write this yourself" would provide the desired response.
Upvotes: 0 <issue_comment>username_14: In a few years, when current students enter the workforce, it seems likely they will be using AI-based language tools like they are using Google today. They will be using these tools to access existing knowledge, to polish the presentation of their results, to translate text that they have written in their native tongue into the language they are supposed to work in, or maybe even as research assistants that contribute genuinely new results. Their tools will likely be vastly superior to what is available now. I would argue we should prepare our students for that future.
With that in mind, I think a scalable and relatively future-proof way of dealing with AI assistance is to *supply an AI-generated response to a homework assignment as part of the assignment*. The instructor would set an essay assignment as usual, but in an additional step they would *also* use the best model they can get their hands on to auto-generate a response, maybe in an iterative fashion where the model first generates an essay outline and then fills in the chapters. They would then add the process they used to generate the essay to the methods section of this example essay. The students will then be graded on whether and by how much they managed to *improve* on this baseline. A brief critique and grading of the baseline could maybe be included by the teacher as well to show the students in what ways the AI-generated text is still failing.
In this way, the topic of AI assistance can be openly discussed in class; students get naturally exposed and can discuss different ways to use AI; they can discuss whatever limitations current AI still has; they learn about attributing credit; and they have to learn and ultimately demonstrate how to do better than the output of a current state of the art public model. Compared to schemes that rely on oral examinations, additional teacher workload should also be relatively low, as the baseline essay they will be producing will be produced in a highly automated way.
Upvotes: 2 <issue_comment>username_15: Going along with username_8's answer, perhaps this should be seen as an opportunity rather than a threat. There are some reflections here about how schools used to view Wikipedia and similar sites. While there is certainly backlash against the increasing use of AI, unscrupulous people are already seeking to advance its use past the point where we can tell it is used. News articles, social media posts, forum messages, and even telephone calls are already hosted by AI bots.
Maybe have students generate an article or essay, and run them through the process of proofreading, fact-checking, and editing. Changes should be tracked along the way (a feature that's part of any decent word processor these days), and the students should be prepared to defend their actions and choices. This could also be a collaborative assignment.
Along the way, the kids may notice peculiarities and biases that we might not, and thus become better critics of AI-generated copy than we could hope to. This should also help inform them on the topics of misinformation and deception.
Even if AI is not widely adopted, academia should always approach these sort of matters thoroughly, rather than immediately regarding them as a challenge to tradition.
Upvotes: 0 <issue_comment>username_16: Just one small aspect: For student papers require the bibliography to also include YOUR library's call numbers for books and the addition of the link used for online access of journal articles or the call number if accessed print. This is also good for ghostwritten work, as it makes the ghostwriter have to use your library, upping the cost. ChatGPT will just write bullshit, or forget to add this. I know this is not standard for publications, but for training writers you get them to show the library work they (supposedly) did.
Upvotes: 1 <issue_comment>username_6: (I'm adding another answer because it's fundamentally different from the other one I wrote)
There are tools available now to detect AI-written text. One way is to get another AI to classify the text; the other way is to add a watermark to the AI-generated text. Watermarking is more sophisticated than physical watermarking; it modifies the output text (by the original text-generating AI) in a way that is detectable by computers, but not by humans.
See sources for [classifiers](https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text) and [watermarking](https://arxiv.org/abs/2301.10226) respectively.
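To give a flavor of the watermarking idea, here is my simplified sketch of the scheme in the linked paper (not its exact algorithm; real implementations work on model tokens and logits rather than words):

```python
import hashlib
import math
import random

def green_set(prev_word: str, vocab: list[str], gamma: float = 0.5) -> set[str]:
    # Deterministically partition the vocabulary, seeded by the previous word;
    # the generating model softly prefers words from this "green" portion.
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    shuffled = sorted(vocab)
    random.Random(seed).shuffle(shuffled)
    return set(shuffled[: int(gamma * len(shuffled))])

def watermark_z_score(words: list[str], vocab: list[str], gamma: float = 0.5) -> float:
    # Unwatermarked text lands in the green set about a fraction gamma of the
    # time; watermarked text lands there far more often, giving a large z-score.
    hits = sum(w in green_set(prev, vocab, gamma) for prev, w in zip(words, words[1:]))
    n = len(words) - 1
    if n <= 0:
        return 0.0
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
```

The appealing property is that detection only needs the hashing scheme, not access to the model itself.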
Upvotes: 0 <issue_comment>username_17: Students are expected to understand grammar and punctuation, but Grammarly is allowed and even recommended by many tutors; students can use it to improve the quality of their writing. If Grammarly was embraced, why shouldn't other AI advancements be embraced? If 20 students out of 100 use ChatGPT to answer the same question, won't Turnitin flag some of them for plagiarism? Even when you use ChatGPT, you still have to read and edit the output, and you have to assume the answer you are getting has not been submitted elsewhere. It cannot write a 15,000-word dissertation, or even a 5,000-word essay asking the student to analyze a case study; the student will need to ask several questions to get a good word count, and also proofread and format references. At master's level, ChatGPT is just like an advanced Google: it gives you the information you need instead of making you search multiple websites, but you still have to study the output and make it align with your intentions. The output will not always be original as more people use it and as more students submit to the Turnitin or SafeAssign databases. Schools should wait before implementing any policy. Despite Turnitin's claims, students are able to beat it using paraphrasing software; Turnitin is just another company trying to sell its services, and institutions should not jump at every service, product, or update offered by the company. PowerPoint slides with narration cannot be done by ChatGPT; student input is still needed. The process of creating the slides is the learning process, and it makes no difference whether the answer was provided by ChatGPT, Google, or the school library.
Upvotes: -1
|
2022/12/15
| 425
| 1,759
|
<issue_start>username_0: It is becoming increasingly important to keep track of performance; do scientists have a good figure of merit to measure their performance?<issue_comment>username_1: There is no single measurable criterion by which one can easily evaluate a researcher's performance.
Some quasi-efficient metrics include:
1. publications - but it is relatively easy to produce a large number of low-quality papers (salami slicing, guest authorship, incremental research, unnecessary duplication of studies, paid-for publications, etc)
2. publications in top-quality journals - better, but leads to massive bias towards well-funded groups and neglects good research from low-funded countries, individuals and research areas. Some excellent researchers may decide not to go through the peer-review system and publish their results only on preprint platforms like arXiv
3. publications and citations (e.g. h-index) - better, but can be manipulated through excessive publishing and self-citations
There are other metrics, but none is perfect. Generally speaking, research is about the study of the unknown, and you can't define a good KPI for this sort of activity, because you can't reliably measure what you don't know.
Upvotes: 1 <issue_comment>username_2: Since this is an academic forum: at the top of the heap, elite scientists measure their own performance through their students' performance and success.
More generally, especially for non-teaching academics, see Dmitry's answer.
Upvotes: 0 <issue_comment>username_3: The h-index is the most common metric, but as this article shows, it can have [counterintuitive behavior](https://asistdl.onlinelibrary.wiley.com/doi/abs/10.1002/asi.21678). Therefore, there is no good KPI for scientific careers.
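For readers unfamiliar with it, the metric is simple to compute; a minimal sketch:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have h or more citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```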
Upvotes: -1
|
2022/12/15
| 752
| 2,887
|
<issue_start>username_0: I am unsure about how to capitalize the title of my dissertation. In my case, I am wondering about "testing" and "empirical":
>
> Four Essays on the empirical testing of the Efficiency Hypothesis
>
>
>
Should I capitalize gerunds?<issue_comment>username_1: I'm not a language maven, but I'd think that both Empirical and Testing should be capitalized. They seem to be especially important words in the title - the essence of it, actually. The work is about empirical testing, after all.
Some might ask you to capitalize all words in a title, even articles.
In a thesis, your advisor should have good advice.
Upvotes: 1 <issue_comment>username_2: The quickest solution (for me, at least), is to type in your title at <https://capitalizemytitle.com>. It gives correct title capitalization in many formal styles. And it has detailed information to explain the rules it follows.
Style guides vary, but generally, any word of any grammatical type that is four or five letters or longer is capitalized. (Different style guides differ on the four- or five-letter cut-off.)
For just about all style guides, "Testing" and "Empirical" are definitely capitalized.
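Those rules are mechanical enough to script. A toy sketch (mine; real style guides differ in their exact small-word lists and cut-offs):

```python
SMALL_WORDS = {"a", "an", "the", "and", "but", "or", "nor",
               "as", "at", "by", "for", "in", "of", "on", "to"}

def title_case(title: str) -> str:
    # Capitalize every word except short function words; the first and last
    # words are always capitalized, a rule most style guides share.
    words = title.lower().split()
    return " ".join(
        w.capitalize() if i in (0, len(words) - 1) or w not in SMALL_WORDS else w
        for i, w in enumerate(words)
    )

print(title_case("four essays on the empirical testing of the efficiency hypothesis"))
# Four Essays on the Empirical Testing of the Efficiency Hypothesis
```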
Upvotes: 5 [selected_answer]<issue_comment>username_3: I think there are three plausible options.
>
> Four Essays on the Empirical Testing of the Efficiency Hypothesis
>
>
>
This is [title case](https://en.wikipedia.org/wiki/Title_case): you capitalise almost all words. The only words not capitalised will be things like articles, conjunctions and short prepositions (and perhaps some other words where a lower case initial letter is significant, e.g. "E. coli"). Exact rules vary, but "on", "of" and "the" would be lower case in all major versions.
>
> Four essays on the empirical testing of the efficiency hypothesis
>
>
>
This is (normal) sentence case. In some disciplines, the titles of articles are usually in sentence case, and IMO it makes sense to do the same for theses in those disciplines.
>
> Four essays on the empirical testing of the Efficiency Hypothesis
>
>
>
This is still sentence case, but regarding Efficiency Hypothesis as a proper noun. If you capitalise it when it appears in the text of your thesis, obviously you should also capitalise it in the title.
In particular, capitalising "Essays" but not "empirical testing" would be inconsistent.
Upvotes: 3 <issue_comment>username_4: >
> the Efficiency Hypothesis
>
>
>
I doubt that's a good choice of phrase for a PhD title, regardless of capitalization. "Efficiency" comes up everywhere; this is super-vague. At least use something like "Smith's Efficiency Hypothesis" or "The [process name] Efficiency Hypothesis" if that's the process you want to make efficient. Remember, the PhD title will be read not just by people in your specific subfield, but by people with very diverse backgrounds.
Upvotes: 0
|
2022/12/15
| 897
| 3,849
|
<issue_start>username_0: I am writing a paper examining performance among female athletes in a particular, less-well-documented sport over a certain time period. Unexpectedly, I have discovered a paper addressing very similar questions to those in my paper, but about (exclusively) male athletes in the same sport.
The earlier paper is, in my view, very readable. The structure is easy to follow; the (potentially difficult) choices about which tables and results to present (and which to omit!) are also well made. In other words, the earlier paper could provide a great model/template for my paper on women athletes.
If I were to closely follow the structure and choices of the earlier paper, a reader of both my paper and the earlier one would find it easy to make comparisons. My question is how that kind of structure-copying would be viewed by an editor. Plagiarism? ... or just a sensible (homage) choice?<issue_comment>username_1: Plagiarism is using someone else's work, ideas, formulations, etc. in a substantial way without attribution.
In your case, you now have a paper that essentially repeats someone else's work using a different data set. If your conclusions are interesting, then the paper as such is publishable. Just make clear what the connections between the two papers are. Following the same structure has actual benefits because it makes a comparison section easier to do.
The structure of scientific papers in a subfield often follows a given pattern that cannot be attributed to anyone. If you go much beyond this, you can just insert a phrase such as "following the lead of X [x], we first do this, then that, and then we analyze using this and that". Since plagiarism has an extensive gray zone, it is hard to give general answers.
Upvotes: 3 <issue_comment>username_2: There are a few things to consider when checking whether something constitutes plagiarism.
1. Ethics: Are you presenting someone else's work as your own or using it without proper attribution?
In this case, I think you're well in the clear. You can rest comfortably knowing that you have created the information entirely on your own, and any similarities to other works are coincidental.
2. Optics: Even if you've created the work entirely yourself, will others recognise that? Do you need to take steps to avoid the false suspicion of plagiarism?
The ethical question is the more pressing, but we can't discount the pragmatic reality. You knowing you've worked honestly won't be much help if someone gets it in their head that you've plagiarised something, so sometimes we might make sure to keep away from any false suspicion. In this case, an excellent question to ask is whether this information could be presented otherwise, or if this is simply the most reasonable/efficient method of doing so. Many people have used "2+3=5" or "In this paper, I will demonstrate that...", but that doesn't make the usage of such plagiarism. And, to extend the analogy, structuring a paper in a particular way will likely have been done before. (I'd be more surprised if it *hadn't*!) Structural similarities will, barring any other suspicions or dodgy scenarios, be dismissed as simply a necessary part of the process.
3. Support: Is such a similar work actually a *benefit* to your work's credibility?
If you've done work and come up with a similar (or in fact the same) result as other people, that's likely a good thing! It not only gives your work a boost in credibility right off the bat, but it also helps the former work, as now it's being verified or corroborated.
So, in summary, I think that it would be ridiculous for anyone to claim plagiarism on your work, just because another paper about nearly the same subject had a very similar general structure. You seem to be in the clear both ethically and pragmatically.
Upvotes: 2 [selected_answer]
|
2022/12/16
| 429
| 1,791
|
<issue_start>username_0: I'm interested in pursuing a PhD in clinical psychology, but there aren't any professors doing clinical psych research where I'm going to school. I'm at a top university, and I'd like to get into a prestigious PhD program. I don't know how to set myself up for that though, trying to reconcile what I'm interested in, what my school supports, and what's hot in the field right now.<issue_comment>username_1: Firstly, it is normal that no one at the university has the same interests as you. However, that does not mean that they cannot supervise you on the topic that you are interested in. You have to ask them first.
Secondly, I used to think that I should only do (undergraduate) research in the field that I liked the most in order to prepare for graduate application. I was wrong. If you want to become a good clinical psychology researcher, you need to become a good researcher **first**. You need to know how to do research; philosophical stuff; how to cite papers; how to read papers; how to do literature review; critical thinking; etc.
That is why any research experience is immensely useful. As a pre-PhD student, the domain knowledge that you learn from a particular research project may not be as important as the research experience - which will be useful for a lifetime and not limited to any discipline/topic. Good luck!
Upvotes: 2 <issue_comment>username_2: Other than what others in the thread have said, you can always approach professors from other universities, talk to them about what you're interested in, and offer to do research for free. This will 1) give you experience and 2) give you a connection and a possible recommendation. Basically, it doesn't have to come from the same university where you got your degree.
Upvotes: -1
|
2022/12/16
| 1,334
| 5,615
|
<issue_start>username_0: Long story short, in September 2021, I was hired as an associate lecturer at a British university. I was headhunted for the position. Since I was about to finish my PhD and I needed some experience, I decided to put myself forward.
I covered for a colleague who was on research leave. I became the interim coordinator of the [MA](https://en.wikipedia.org/wiki/Master_of_Arts) programme and took responsibility for all the teaching, MA supervision, and administrative work.
The Department repeatedly asked me for many favours (covering for colleagues on sick leave, taking on marking, etc.), usually at very short notice. I accepted, even at the expense of my mental health. I delivered. The students were very happy with my performance and both the head of the department and my colleague were complimentary to me.
So, in November 2022, the department decided to advertise a permanent position. My only colleague strongly encouraged me to apply. It was basically to do the same job I had been doing, but permanently. I listened to my colleague and went for it.
A few weeks later, I was invited to interview. Both my colleague and the head of the department were on the panel. I attended the interview and answered all their questions. They said they were really impressed. However, after days of silence and deliberation, they decided to offer the job to someone else.
My colleague got in touch with me shortly afterwards to give me 'a massive thank you' and told me that the decision they had to make was very difficult. Apparently, I was a very close 'second'. I now feel upset and disappointed. I kept the programme running when no one else was available.
I've applied for jobs in the past. Sometimes, I made it to the interview stage. I've dealt with rejection before (it's part of the process), but the feeling here is quite different. I feel I've been let down. Am I being unreasonable?<issue_comment>username_1: You needed experience, you got experience. Sounds like a successful venture to me. You should probably make sure that experience is well represented in your application portfolio, hammering it home by lining up solid recommendations, and then continue applying for jobs.
Be wary of letting your disappointment trick you into burning your bridges.
Upvotes: 4 <issue_comment>username_2: This is quite common for UK Universities. They are under-resourced and run off staff goodwill. Covering for each other is a good strategy for established staff, but early career academics on precarious posts often find this exchange a little bit one-sided. You are right to feel let down. Many of your colleagues feel or felt the same at some point.
I am sorry you find yourself in this place. You are not alone.
Upvotes: 4 <issue_comment>username_3: You were hired on an interim basis to fulfil a job and you were paid to deliver. From the description, you delivered well and even exceeded the job expectations. It is difficult to judge what is right or wrong about the favours you were asked. I would say it is inappropriate to expect that favours will ensure you get a position that was transparently advertised and, hopefully, transparently granted.
How can you deal with it?
1. Make sure you get a comprehensive letter of recommendation that includes all your work delivered and even highlights the extra work you have done.
2. Remain on good terms with your superiors, finish your work professionally and let them know that you might provide their contact details as references for future job applications.
3. If you do not want to 'feel' exploited, set boundaries for your job and, please, never compromise your mental health. Instead, identify tools to deal better with such situations. Whatever you can learn from your current employment will enrich your experience and prepare you better for future situations.
4. If you deliver 'extra' work, you might be entitled to ask for additional benefits such as a training course, etc. leading to a win-win situation.
Upvotes: 3 <issue_comment>username_4: I think that in the recent past your having filled in with aplomb would have made you a shoo-in for that permanent vacancy.
But today there are so many candidates for each permanent vacancy that hiring departments can always go for the "fresh veg".
You are naturally aggrieved. While no *quid pro quo* was spelled out in relation to the permanent job, you did **everything** you could to show your loyalty and flexibility. What's more, you applied for the job after a specific request from a colleague acting on behalf of the department.
Ethically, it's the equivalent of a breach of promise in the days of our grandparents. But you would be unfair to yourself to brood too much over people like this. You have to keep positive for the next opportunity and give the benefit of the doubt to the next potential employer. But you must also learn to protect your own emotional investment in a job and ensure you get what is merited from such commitment. Precisely *how* to do this is your own call, but I imagine very frank conversations must be had before any effort "over and above the call of duty" is to be entertained again.
What deselected you was quite possibly the HoD (and his emissary, that "colleague" of yours) not wanting to work alongside someone who got them out of an embarrassing academic situation, rather than any superiority of the chosen candidate.
Right now it's little consolation to you but the carry-on of your last temporary employer will soon catch up with them.
Commiserations and I hope you have a Christmas free of stressful ingratitude.
Upvotes: 1
|
2022/12/16
| 1,386
| 6,028
|
<issue_start>username_0: I am interested in studying abroad in America. I am currently doing a M.Sc. in Quantum Science and Technology in Germany and would like to start a Ph.D. in the same subject at one of the big American universities (MIT, Harvard, Caltech ...) after I've finished my degree. I noticed that every one of these programs has another master's integrated, for which I should already have completed the coursework. Can I skip the courses in that part? If only a portion, how large a portion could I skip?
Is there anyone who knows about this stuff (especially for the universities mentioned above) or who might've already gone through what I described above?<issue_comment>username_1: You would transfer classes you already took in Germany into the program in the US. Rules for transferring are specific to the university and will depend on the class you took and the class you want it to count as. There are also rules about the number of classes (credit units) you are allowed to transfer into a program.
For a Ph.D., you usually do not lack credit hours to graduate, so the latter might or might not be a problem.
However, there is (in general) no way to just skip ahead to the thesis part that you know from Germany. You might have to pass a comprehensive examination and defend a thesis proposal.
Upvotes: -1 <issue_comment>username_2: This question's in a bit of a gray area in that the main answer is, as a comment points out: Whether you can skip any courses, requirements, or recommendations tends to vary on a program-by-program basis. You'll need to contact Departments individually if this is a serious consideration.
To add a quick anecdote that may display the scope of variance, I can report different allowances within a **single** PhD program. The punchline in this particular circumstance is that the requirements were written flexibly. That probably does not generalize, however.
1. I was an undergraduate student at the same place as my PhD, but did a 1 year Masters in Europe in between. The Dept allowed me to skip the graduate courses I took with them during my undergraduate and nothing else. I sat "first year" and PhD qualifying examinations in the same semester, and eventually graduated in 4 academic years (accounting for the 1 year MSc). That last part basically required my PhD adviser to approve.
2. Some friends were undergraduate students in the same Dept. and then continued directly to a PhD in the same Dept. They too were not required to re-sit graduate classes they had taken as undergraduates, and I learned later were also allowed to skip some "first year" examinations as well. No other requirements waived, they eventually graduated in 5 academic years.
3. Some PhD students started directly from undergraduate programs at different US institutions. They had taken comparable graduate level courses as undergraduates, but were required to take those again at the PhD institution.
These are just 3 of several variants I encountered within one Dept. It's not going to be useful to think of US PhD programs as consisting of 1 European Masters + 1 European PhD. Each US PhD program has its own requirements, which often include some coursework that a European MS might cover.
Upvotes: 2 <issue_comment>username_3: Most US doctoral students start with only a bachelor's. Few programs require a master's at any point, though many make one available along the way, sometimes just by filing paperwork and sometimes by writing a master's thesis (my case).
Actual requirements vary. In my doctoral program (math, but long ago), there were only two requirements: pass comprehensives and write a dissertation acceptable to the faculty.
But all students took courses as they were more advanced than undergraduate courses and were designed to enable passing comps, as well as giving a broad math background. Quite a number of the courses I took in the doctoral program had the same names as undergraduate courses but the level of instruction and difficulty was quite a bit higher. One way to think of it is that undergrad courses give the basics, but if you want real math insight you may need to go beyond what you can get in undergrad education.
Contrary to a comment, there was no "cohort" of students that needed to progress together. That may be different in other fields than math, I suppose. But you needed to pass those courses that you did take.
A doctoral student will probably have an advisor, not necessarily their research advisor, to help them choose whatever necessary courses are needed. So you get some guidance and aren't entirely on your own. The research advisor can be chosen later and isn't normally needed for admissions. If a student is already skilled enough in the required (comp exam) topics they might need to choose a research advisor relatively soon.
And, as mentioned in a comment, I always recommend students to "cast a wide net" of doctoral institutions. You don't just "enroll" at a place like MIT (or anywhere). You "apply" and are accepted or (most often) not. Top level institutions are very (very) competitive in admissions, accepting some fraction which tends to be smaller at the top since there are a lot of applicants and a limited number of slots. You need good letters of recommendation for admissions.
You won't need to repeat a master's, certainly. You might need some courses. But you probably won't need to take any courses in things you've already *mastered*, though mastering has a high bar. The master's program you are in might or might not prepare you to take comps at some US institution. Probably there will be overlap. Possibly (probably?) there will be some gaps.
There are really two gates. The first is admissions. Your background and potential will be weighed and measured. The second gate is comprehensives, sometimes including orals, and folks may (will?) be looking for insight, not just competence. Now you can do the dissertation, which I'm not terming a gate, though there is a high bar there, also.
Upvotes: 0
|
2022/12/16
| 585
| 2,433
|
<issue_start>username_0: I am a graduate from a top 30 liberal arts college (according to usnews.com at least) and I've applied to several PhD in Computer Science programs. How do the departments of the, say, top 15-20 programs view applicants from liberal arts colleges? Are they still classified as "top schools"? Or is there any disadvantage in coming from liberal arts schools and not an R1/R2 university?<issue_comment>username_1: Opinion only - no data.
I think good grades in a CS major and strong recommendations from faculty who know your work would make your application competitive with similar applications from research-oriented schools.
Upvotes: 1 <issue_comment>username_2: In general, there should be no issue for any "good" undergraduate institution, as long as the undergraduate degree is appropriate for study in CS. Places get reputations, and recommending professors get a reputation for accurate predictions.
Since this seems to be a US-centric question, you don't even need to have majored in CS necessarily, and something like math is probably enough, though some skills (programming, data structures, ...) would be helpful. Changing fields within bounds after a bachelor's is pretty common.
If you are a CS student in a program accredited by the ABET/CSAB engineering board (pretty typical), then you should have no issues at all regarding the institution, and acceptance will depend on all the other usual factors.
Some top undergrad programs don't bother with accreditation and use other means to demonstrate their quality.
Note that good R1 or even R2 doctoral programs wouldn't be able to run on just the output of other such places, as almost all students graduate from other sorts of institutions. The doctoral program itself recognizes this fact and is designed accordingly. There are good undergraduate programs and good professors in lots of places. You also are likely to get more faculty "attention" at a small place than at an R1. Fewer research opportunities, perhaps, but a typical doctoral program (some exceptions) doesn't start with research.
Upvotes: 1 <issue_comment>username_3: I'm a CS Professor at a US liberal arts college.
Our graduates regularly get accepted in top grad programs in the US.
Most of those who get accepted have at least some research experience in the form of an undergraduate honors thesis. Some have done REUs, and a few have published papers as undergrads.
Upvotes: 0
|
2022/12/16
| 532
| 2,170
|
<issue_start>username_0: I submitted my manuscript this July and got comments from two reviewers and two editors. It seems like one reviewer was not in favor of my manuscript, although s/he did not directly say to reject it. The other one was positive and said to accept mine with a minor revision. Overall, the editors told me in the email that they are prepared to accept my paper, BUT it will be SUBJECT to how I revise the original manuscript. So since then, I have been revising the manuscript and responding to every single question that the editors and reviewers raised.
I am trying to be meticulous and working on revising the manuscript thoroughly.
However, I don't want to get my hopes up. It's kind of a self-defense mentality, I guess.
According to the editors' comment, are there high chances that my manuscript will be accepted as long as I do my job decently? I don't know; this is my first time trying to get a paper published, so I have no idea about this. Is there still a possibility of being rejected even if the revision is not so bad?<issue_comment>username_1: Yes, it is possible to still be rejected, but you are doing the right thing to make it less likely. "It ain't over 'til it's over", as <NAME> once said.
You are probably in good shape. Get a colleague to review your updates, perhaps for some advice on the overall product.
Upvotes: 0 <issue_comment>username_2: You're trying too hard to parse what the editor is saying because you think that an editor is some kind of black box that operates on its own rules. But in reality, an editor is just a person like you and me, and if that person says something in plain language, it's a reasonable approach to believe that what they say is actually what they mean. That is, if you got a reply that states...
>
> that they are prepared to accept my paper. BUT it will be SUBJECT to how I revise the original manuscript
>
>
>
then that is exactly what you should assume: If you do a reasonably good job at revising the manuscript, then the editor will accept the paper. There is really not more to it than what it actually says -- read it as if your roommate said it.
Upvotes: 2
|
2022/12/17
| 2,147
| 8,663
|
<issue_start>username_0: I am a first-year PhD student in computer science and caught in a political dilemma.
My master's thesis project was supervised by a researcher at my university; let us call him Dr. Anderson. There is currently a paper by us on this topic under review. After I finished the master's project, I started a PhD on a different topic at the same university under Assistant Prof. Bergman. Additionally, Prof. Candela (a more senior faculty member) serves as my secondary advisor. During my first six months, I published one paper together with Prof. Bergman, and we are now working on a second paper.
Recently, Dr. Anderson told me that he is planning to write a new paper partially based on my master thesis work, and asked if I would like to be listed as a co-author again. I am glad that he recognizes my contributions, and from my perspective the more publications I have the better. Therefore I said yes.
However, when I later mentioned this to Prof. Candela, he promptly advised me to change my decision. According to him, publishing a paper during my PhD without Prof. Bergman would be bad for my relationship with her. She might think that I am spending my time working with Dr. Anderson instead of on the project she is paying me for. He also said that it would be bad for my career to become too associated with Dr. Anderson's topic, since it is less impactful than my PhD topic. His suggestion was to ask to be mentioned in an acknowledgment instead of being listed as a co-author.
I was surprised to hear Prof. Candela’s response. I think it sounds silly that Prof. Bergman would be so upset by this publication. Of course I would not spend much of my time on it, aside from reading through the manuscript and maybe providing some comments. Instead, it feels like this is part of some political game between faculty members.
If I decide to remain, I would be one of many authors on Dr. Anderson’s paper, which will be submitted to a lower tier conference. So in the end, it would not make a significant difference on my CV. To avoid any potential drama, it would be easiest to follow Prof. Candela’s advice. However, I still feel proud of the work I did with Dr. Anderson and would like to be properly recognized for it.
Would it be foolish to disregard Prof. Candela’s advice, and remain as a co-author? How can I talk to my main advisor Prof. Bergman about this without causing trouble?<issue_comment>username_1: [No one can serve two masters.](https://biblehub.com/matthew/6-24.htm)
You should have alerted your advisor at the start. The relationship with your PhD advisor is supposed to involve significant trust in both directions. Doing work with another prof without telling them is kind of whack, even if it is minimal work and time on your part.
If you had done so, a reasonable prof would have allowed you to work on the paper, provided it was not a huge amount of work. Your time is, to some extent, up to Dr. B. So you cannot spend arbitrary amounts of time on other tasks. Yet, it is also reasonable for you to keep relationships with other researchers. So closing them off would not be reasonable for Dr. B.
At this point I would suggest you go to Dr. B, and make a clean breast of it. Tell her everything, including Dr. C's suggestion. Be apologetic. Explain that you didn't know you should have told her. Promise to do better in the future.
Upvotes: -1 <issue_comment>username_2: While [username_1's answer](https://academia.stackexchange.com/a/191712/69206) expresses some good ideas, it seems a bit extreme unless your current supervisor Dr. B. is unreasonable or paranoid.
According to the question post you are not doing any more work, and are only asking about your name appearing in a long list of authors because your previous work is being used again.
That's certainly well within "standard operating procedure" and sounds like it could simply be a courtesy to you.
Only if Dr. C knows something they are not telling you should you worry about this. Perhaps there's a conflict, or a problem surrounding Dr. A or their research that they can't really tell you about right now?
If your relationship with Dr. B is good and everything smells on the up-and-up, just mention it to them including that the question is only about them adding your name to a list of authors, not doing anything new.
But I see no reason to follow the other answer's advice
>
> Explain that you didn't know you should have told her. Promise to do better in the future.
>
>
>
unless there's much more to this story than has currently been mentioned.
Upvotes: 4 <issue_comment>username_3: If a PhD student of mine were to publish the results of work that was done with a previous supervisor, I would:
* *like* to be told about their success, but certainly wouldn't *expect* to be
* congratulate them on their success and tell them how good it is to get early rungs on the publication ladder.
I can understand that the reaction of your current supervisor would be of concern to you but it suggests that they, rather than you or your behavior, are the problem.
Upvotes: 5 <issue_comment>username_4: Yes, Candela's response is incomprehensible and the reasoning offered (which I only know via your recounting of it) is ridiculous on the face of it.
However, there are things I do not know. For instance, Bergman may have been known to be irrationally touchy and "difficult" about this sort of thing in the past and C wants to spare you and himself the hassle.
That said, B has no real reason to obstruct your work with A unless there are some really clear indications that it detracts from your work with B.
The only way to get this in the clear is to discuss the matter with B.
If this does blow up in your face, please know you have done nothing wrong.
"a lower tier conference" - could this be a clue? Is A somebody C does not want you to be associated with, the black sheep of the department? I am stabbing in the dark, but it is by no means beyond the realm of possibility.
Upvotes: 3 <issue_comment>username_5: Had Dr. C suggested that you discuss this with your supervisor, then I would say this might be useful mentoring. I agree with one of the other answers: I would want to be informed, but if it genuinely took you little or no time, then I would fully support this work, as long as there was no other deadline. It is really very normal to be continuing work and collaborations from a previous position. One day Dr. B will be glad you are finishing work you did with her while you are at your first postdoc!
I hope Dr. C is not being sexist in assuming they have to defend Dr. B rather than asking her own opinion about how her own supervisee's time is spent.
Finally, with respect to the paper: if it is really a weak conference as you say, and you are doing more important work now, then *if* you run out of time, you can withdraw and ask for only an acknowledgement. But especially early in your career, just having a few more papers can help boost your h-index. And anyway, it's only right and just that you should get credit for your work. As long as Dr. A isn't trying to take advantage of the fact that you are at a more prestigious lab now, I think it should be OK. It's slightly possible that this is what Dr. C is trying to defend you from, but then I still don't know why they wouldn't talk to you and Dr. B about it at the same time.
Upvotes: 2 <issue_comment>username_6: There is no reason to decline authorship on the paper, unless:
1. You anticipate quality or academic honesty issues with the paper,
2. You expect getting it published and through peer review will
meaningfully detract from your current work, or
3. Your senior advisor is irrationally jealous.
If 1, you should politely decline to be associated with the paper (obviously), but it sounds like you are proud of the work and this isn't the case. If 2, you should have a frank discussion with both Dr. A and Dr. B about your availability and come to an agreement about your level of involvement with and credit for the work. This might mean authorship while limiting your work to a few reviews of the draft paper, or possibly being acknowledged in the work without full authorship. If 3, this is a serious problem, unfortunately, and it is difficult to offer outside advice.
Re: "He also said that it would be bad for my career to become too associated to Dr. Anderson’s topic since it is less impactful than my PhD topic." - unless Dr. A is an outright crank, this isn't something worth worrying about. It is correct to focus on your PhD work, but not to the exclusion of publishing your existing MSc. work.
Upvotes: 2
|
2022/12/17
| 1,624
| 7,015
|
<issue_start>username_0: I'm a sophomore CS Major, and I have a final programming project due on Monday for one of my classes, but right now, I've been stuck on it for the past two days. I've tried several different approaches to finding the problem with my code, but I can't get it, and I can't ask for help from anyone (including TAs/the professor) since it counts as an exam. Should I just give up on it and submit what I have at the moment? It's really frustrating and stressing me out, and I have another final to study for.<issue_comment>username_1: In my view, the crux of your dilemma is contained in the final context statement ("... I have another final to study for") rather than in the simple question about whether to give up on an assignment or not.
For example, if your current focus is on an assignment for an elective that is not critical to your overall success, it would be obvious that you should change tack. You might also consider how likely you are to pass the other exam without further study, and how necessary (or not) it is for you to get something higher than a mere pass.
Whether for good or ill, a lot of university study involves trade-offs of the kind that are sometimes referred to as exploitation vs exploration. Do you invest more time in the problem you are currently working on, or do you explore other options (studying for other forthcoming examinations) in the hope that the effort you invest there might be better rewarded than in your current focus? Unfortunately, only you can answer ... but it is worth explicitly thinking about:
* your own estimate of the probability of you solving the assignment you're working on, as well as your estimate of the likely value (mark) that you'll receive
* your estimate of the probability of you successfully studying for the other upcoming exam, and of the mark you might achieve
* and finally, the relative value of the two different assignments/exams.
Upvotes: 3 <issue_comment>username_2: You're going to have to make the decision. I'll only point out that this is not dissimilar to another situation which you've surely encountered before.
Consider: you have 30 minutes left in an exam, and you're stuck on one question which you've spent 10 minutes on without progress. You could give up and move to another question which you've yet to look at, or you could keep working on it. What do you do?
Ultimately you need to do whatever maximizes your grade. If you keep working on this assignment, how many points are you likely to gain? If you give up and work on the other exam, how many more points are you likely to gain? If you give up, are you likely to still pass? How important is it for you to pass anyway (e.g. if it's a core course and you fail could your graduation be delayed by a semester)? You know your circumstances better than anyone else, so you will need to make the decision; nobody else can do it for you.
Upvotes: 4 <issue_comment>username_3: Let me suggest, first, that not all projects are successful, nor all approaches productive. You also have a fairly firm (hard?) deadline to produce *something*.
Rather than giving up on the project and not submitting anything, I suggest that you write up what you have done and where you are blocked.
Most important, include your best analysis of the nature of the block as you see it. What is it about the problem that creates a gate you can't open? What thinking process led you to the current state?
---
Personal note: I once passed a comprehensive doctoral oral exam with flying colors, responding in just this way to the examiners. "I'm blocked, sorry, and this is why...". I was later actually praised for that response, since it showed some insight, though no solution.
Upvotes: 2 <issue_comment>username_4: >
> and I can't ask for help from anyone (including TAs/the professor) since it counts as an exam.
>
>
>
That's not a reason not to ask for help. They may not give you exactly what you ask for, but they may offer *some* kind of help - hopefully the kind which, if you've followed the course material properly, will indeed put you back on track.
>
> Should I just give up on it and submit what I have at the moment? It's really frustrating and stressing me out, and I have another final to study for.
>
>
>
First, consider asking for an extension due to the conflicting final - and if refused, perhaps even an extension with a grade decrease. Better that than just "bombing" the assignment.
Second, consider asking for some alternative form of evaluation given the circumstances.
A third option: In some universities, the final exams in courses are given twice - in two sessions with a different exam for each - where the student can choose to take only one, or both, but with the second session's score always replacing the first session's. Perhaps something like that can be arranged with this course of yours? e.g. delaying the evaluation (and your grade) until the end of the next semester, taking the final assignment with them?
A fourth option: If the course staff refuses everything, consider consulting a colleague who's also taking the assignment, and who you know has completed it or has fully worked out what is to be done. Tell them you're completely stuck, and ask for some guidance. Guidance, of course, is not the same as access to your colleague's solution - but something to help you become unstuck. This final option may get into some ethically-gray territory and also depends on the social and formal conventions of this kind of assignment-taking at your institution, so take this suggestion with a grain of salt.
Upvotes: 0 <issue_comment>username_5: ### Does it compile?
If code doesn't even compile, generally you'll fail a programming assignment. So at least get to that point. If that's your problem, you have plenty of compiler error messages to tell you what you've done wrong, and Google has more info on fixing them.
### Does it run at all?
If it crashes at some point while running, stop the code at some chosen point (a breakpoint or a print statement) and see if execution gets there. If it crashes before that, stop at an earlier point and try again, until you find a point that execution does reach. After that, work forward until you find where it's crashing.
Note that running in a debugger will make this easier. The debugger will give you a stack trace telling you what function it crashed in, and the nested function calls which led up to that crash.
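To make that concrete, here is a minimal, self-contained sketch of the bisection idea (Python and all names here are hypothetical, chosen purely for illustration). Add checkpoints, run the program, and note the last checkpoint that prints; the crash is just after it:

```python
# Hypothetical example: locating a crash by bisecting with checkpoints.
def summarize(items):
    return sum(len(s) for s in items)       # hides the bug: len() fails on an int

def process(records):
    print("checkpoint A: entered process")  # prints -> crash is later
    cleaned = [r for r in records if r]     # drop empty entries
    print("checkpoint B:", len(cleaned))    # prints -> crash is later still
    total = summarize(cleaned)
    print("checkpoint C:", total)           # never prints -> bug is in summarize()
    return total

process(["alpha", 42, "beta"])              # raises TypeError inside summarize()
```

Running this prints checkpoints A and B but not C, which pins the failure down to `summarize()`; a debugger gives you the same information from the stack trace without editing the code.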
Once you're past these two steps, you're into the possibility of a "gentleman's C". In which case document your known failure - what the inputs are, and what outputs you're getting versus what outputs you're expecting. This demonstrates you know good practice for debugging, and that you've isolated what's going wrong. It also points the assessor at where they'll likely check for the bug and how to check what's right versus what's wrong.
And not infrequently, the process of writing this down might trigger a spark of realising what you've done wrong. :)
Upvotes: 0
|
2022/12/17
| 1,843
| 7,971
|
<issue_start>username_0: I have a new collaboration with Dr. Y: together with Professor X, we applied for funding for a project based on a working paper that I had worked on for more than 1 year. Dr. Y provided some of his successful past applications and also edited the funding application. I was happy with his contribution at the funding application stage.
We were awarded the funding and worked on the project. The project is not in Dr. Y's area and he did not contribute any ideas to improve the work in the project (I included Dr. Y because Professor X advised that Dr. Y had been successful in applying for that type of funding). However, Dr. Y did not even bother to put in the time to edit the project reports properly when the research assistant sent them. His editing was minimal and numerous spelling errors remained.
We completed the project and the group discussed publishing 3 papers out of the report. In preparing the first paper, I hoped Dr. Y would contribute more, but it was the same: he only edited the paper minimally. Now I don't see any point in including Dr. Y in the other 2 papers, but Professor X insisted that I should. He said that Dr. Y gossips a lot and is going to be a professor soon, and that I should not upset Dr. Y.
Professor X has been my supervisor and mentor for more than 8 years (I now have tenure). Should I follow his advice? What is the cost if I do not, and also upset Professor X? What is the risk if I include him in the two papers?
|
2022/12/17
| 1,680
| 7,263
|
<issue_start>username_0: I am a second-year PhD student at an American university. I am having an issue with relationships in my lab. Right now I am very jealous of another student in my lab, and I am starting to hold grudges against him. Let's call this student X.
We have the same advisor, but I feel my advisor likes X a lot more than me. X is working on one of the most promising projects, which my advisor is highly interested in. The advisor spends lots of time with X in their meetings. In their meetings, when X asks a question, the advisor answers him patiently and explains in great detail until he understands. When I have my meeting, my advisor waits for me to say something, mostly gives short answers, and our meeting ends very soon. I feel that, compared to other students, my advisor has no expectations of me and doesn't care about my work.
I know that these thoughts are bad, and it might not be the case that my advisor has a favorite student. I also think X is stronger than me academically. But I am very upset when I see X having meetings with my advisor. In our lab, our advisor usually comes to the student lab and has meetings with each of us one by one. I have started to hate having meetings with my advisor because I have to watch X having meetings with him before/after mine. And I have started to hate both of them altogether. Indeed, for some personal reasons, I am jealous of both of them.
But I don't want my negative feelings to impact my study. I don't know what to do.<issue_comment>username_1: It is all in your head. You must not give any mental space to such thoughts. It simply depresses you, then it consumes you, and then you end up not having spent sufficient time on research. Acknowledge the negative thoughts, know that they are made up by you and most likely do not reflect reality, and throw them out!
That's what I do, and disclaimer: I am no psychologist.
Upvotes: -1 <issue_comment>username_2: I would like to start by saying that I am sorry you've been experiencing such negative feelings, and I fully understand how those can be time-consuming and mentally draining.
Before suggesting a potential line of action, let me start by pointing out that "liking" should not be the issue here. It is indeed ok for people to like certain people more than others on a personal level. (For example, it is ok for an advisor to socialise outside the lab with one PhD student, but not the others.)
What is not ok, and what should matter here, is any differential treatment in terms of the consideration, opportunities and resources you are each allocated to achieve your work in the best conditions. To decide whether this is the case or not, I would recommend trying to assess the situation objectively by, e.g.:
* asking the opinion of someone who is external to the situation yet able to confirm or refute the claim.
* compiling some objective facts regarding the differential treatment your advisor might be giving X compared to you. These can take various forms, for example objectively estimating the duration and frequency of meetings with your advisor, comparing opportunities to give presentations or attend conferences, comparing allocated budgets, and so on.
If indeed you manage to objectively confirm the claim for yourself (and preferably confirm it with someone external), then I would suggest trying to discuss it with your advisor, backing up your claim with the previously compiled objective data if needed.
In any case, I wish you good luck and hope you can make the best out of this unfortunate situation.
Upvotes: 2 <issue_comment>username_3: If labmate X is more able to produce the work that your professor is interested in, then it is not entirely surprising if your professor spends more time with them. Having said that, even if it is not surprising, it may not be just. Here are a few suggestions; you might try one or a few of them, as they are not mutually exclusive.
* Tell your professor that you are worried you are not achieving as much as labmate X, and ask their opinion of what to do. It may be your professor has not noticed that they are spending more time with X than you, and this may be a polite way to draw attention to this fact. Or, it may indeed be that there's something X is doing that you are not that you can fix. Finally, it might be that your professor is not the best match for your PhD interests, and opening a conversation could lead to transferring amicably to another lab. But I would start out by asking the question in the least confrontational way possible, but ensure that there is a conversation.
* Suggest that rather than individual meetings, your lab has group meetings, so that you can all learn from each other's projects and mentoring, and also possibly find ways to help each other out. I went through a few PhD supervisors when I was a student, and I also just hung out with a few other labs, and my favourite labs and supervisors had group meetings. So now I do this with my own PhD students. Of course for a long conversation, you occasionally have to have one-on-one meetings. But in general, group meetings would probably save your professor time (since you would all learn at once), and increase the learning, fairness, and camaraderie for all their PhD students. You might want to suggest this to the other PhD students first, then bring it to your professor, or it may come naturally out of the previous conversation.
* Realise that it is possible that your getting less attention is an illegal slight against you. Sometimes, either implicitly or explicitly, professors favour students more like themselves in terms of gender, background, or other interests. Again, probably this would be an accident that will get corrected once your professor notices, but sometimes such incidents have to go to outside arbitration. Every modern university should have procedures for this. In fact, for my PhD, there was explicitly a research advisor and an academic advisor, and about the only thing the academic advisor did was give you advice if you were having trouble with your research advisor. That happens to most people at some point! But it is usually fixable.
Finally, I want to share a story with you. I was kind of a rebellious PhD student who followed my own interests, so when I started out as a professor I took a lot of students with a lot of interests, and tried to let them be themselves. As a result of this, I didn't get much research on my own interests done. If I'd been in America, maybe I wouldn't have gotten tenure, but I was in the UK and it just took a long time for me to get promoted. Once I started having enough PhD students that I could have group meetings though, then it was great for me that students saw I was more interested in some projects than others, and more happy to help work on some projects than others. Then each student could choose for themselves whether to work on projects that were congruent with both their and my interests, or to work on things of interest only to themselves. As long as I did a good job and helped everyone graduate and get a job, I don't think it was a problem that I spent more time writing with the students working on the projects that were a part of my own research interests.
Upvotes: 2
|
2022/12/17
| 2,544
| 10,698
|
<issue_start>username_0: I am a final year PhD student, in the process of writing my thesis and submitting the manuscripts of the work I have done. Unfortunately, the work environment in my lab was quite toxic, with several bullying issues, personal (verbal) attacks and condescending behaviour (particularly towards students like me, female foreigners). Soon I am going to submit my first first-author manuscript, which includes as a co-author one of my previous co-workers who was condescending towards me. I had forwarded this issue to our PI several times, but he did not care. Right now I am thinking of actually not adding this person as a co-author, as his bad behaviour also cost me a lot of time.
Is that legal? Is discriminatory behaviour by your co-authors a reason to exclude them? I would put his name in the acknowledgements, but I think his behaviour was too bad for him to be a co-author. What do you think about it?<issue_comment>username_1: If the person in question contributed substantially enough to the paper, you cannot exclude them as an author due to personal differences. These two things have nothing to do with each other.
If the other person did not contribute substantially to the paper, then they shouldn't be an author in the first place.
Even though being treated badly, subjected to sexism, or bullied is unarguably shitty, and I am sorry that you had to go through that, denying someone (justified) authorship because of it is not the proper way to handle such a situation and might (if you go through with it) lead to problems afterwards, as it is their "intellectual achievements" that you would be selling as your own if you exclude them as an author.
Upvotes: 6 <issue_comment>username_2: I am so sorry for that, but I cannot support your action.
If the coworker did contribute significantly to your work, excluding him from the list of authors would be unethical, and it may lead to other problems. Academic contribution should be the only criterion to determine who should be included in the list of authors.
>
> Is that legal?
>
>
>
I don't know, as it depends on where you are doing your PhD; however, I doubt that it is legal.
Regarding your relationship with other coworkers, unfortunately, you have to deal with that separately.
Upvotes: 4 <issue_comment>username_3: Credit/coauthorship is a function of contribution independent of behavior, so just because you don't like someone doesn't mean you should deny them co-authorship IF they contributed.
Include this person as co-author but do not work with them any further. Getting into a credit dispute with a toxic person is not worth your time and energy and could blow up into something worse. Remember, toxic people are much better at being toxic than non-toxic people, so, assuming you have represented all parties and yourself fairly in this question, do you really want to get into a dispute/shit-fight with this person? You're basically playing them on their home turf.
Let's call the problematic co-author "Q".
You are (and I don't recommend this at all) even free to say things like "F-- Q" anytime someone talks to you in person about your paper, but the minute you drop "Q" from the co-authorship list, you now have a stain on your reputation/psyche of "denying credit where it was due" and that's (my opinion here so take with salt) an entirely different level of bridge that you have crossed compared to just announcing "F--- Q" at every opportunity you get. I wouldn't want that reputation, I wouldn't want that on my conscience, I just would want to "warlessly" get away from "Q" as fast as possible if I was in your position.
Upvotes: 3 <issue_comment>username_4: "as a co-author his behaviour was too bad."
His authorship is not about how he behaved, but reflects his contribution. So, as others have said, you should not remove him based on that.
I am really sorry for you; I can somehow relate, as I have experienced a similar situation. But may I tell you that I do not support your "solution" at all. By excluding him you are confronting your colleague (it is also a form of communication) and begging for more conflict! Ask yourself what you want to achieve by doing that. Do you want the conflict to escalate? Or are you open to resolving it? And if the situation is such that you cannot resolve it, then accept that, try to move on, and do not waste more energy.
I would suggest you try to talk to him/her. I know, also from my own experience, that this does not always work, but at least you can say you have tried everything from your side. Good luck!
Upvotes: 1 <issue_comment>username_5: To give a slightly different take from the current answers: It is not ethical to publish a paper with the contribution of a person without crediting them properly as author. However, there might be situations in which you are able to cut out that contribution and publish the paper without it. Either because the paper is good and understandable without it or because the part can be re-done by somebody else. In the latter case, be sure to avoid plagiarising the removed parts, e.g., by having a new person do that who has no knowledge of the original text. Both approaches only work in cases where the authorship is not too intertwined, of course.
Upvotes: 3 <issue_comment>username_6: I'm sorry you're going through that, and while excluding this person from the list of authors is probably not the right call, I hope you can find resources at your university to help you get through this experience and take disciplinary action against this person if that's what you want to do.
It seems to me a lot of the answers are saying that discrimination is something "between you and said co-author", which makes it sound like a personal conflict with both sides at fault. Instead, I think people are forgetting that discrimination is a serious and pervasive issue in academia. A PhD is demanding enough on its own, but dealing with systemic issues like that must add even more burden and be extremely frustrating. You are not alone in this and hopefully (I don't know where you go), there are people and resources dedicated to such situations.
(edit: rephrased as answer)
Upvotes: 3 <issue_comment>username_7: Academia and research are quite a tricky environment to handle. There are three layers of hierarchy and three layers of relationships.
1. Institutional hierarchy
The head is the Rector (or director); their subordinates are the deans, etc. Your position in the Chain-of-Command is Rector-Dean-Department Head-Advisor-You. You are the least significant bit here.
2. Project hierarchy (or Research hierarchy)
The head is the Project Leader of the project. It can be your advisor or any member of the faculty. The Chain-of-Command can be Project Leader-You or Project Leader-Your Advisor-You.
3. Article hierarchy
Here the head is the First Author (You) and the Chain-of-Command is You-other contributors.
The problem is that different issues are to be resolved in appropriate hierarchies.
The problem you describe is not to be resolved within the Article hierarchy. Those who contribute are to be on the author list; those who don't contribute are not. Period. Actually, it could have been resolved by expelling them from the team before they contributed, but that is a task for the other hierarchies, and it may result in a change of the Article hierarchy.
The issue is to be resolved within the first type of hierarchy, because it is workplace misbehaviour. Talk to your advisor, because they are your superior in the tree. If that fails, raise the issue with the Department Head and/or the students' union. Keep it factual and keep it assertive.
The only misbehaviour you can punish by leaving someone off the author list is author misbehaviour: not providing texts, providing false data, etc. And the result is not using their inputs at all, so their contribution is none or negligible and you can defend the omission. Bullying, verbal assaults and the like are a serious problem, but not one to be resolved this way.
Upvotes: 2 <issue_comment>username_8: I am sorry you are facing this issue and you had an unpleasant, abusive working environment.
One may argue we are hearing only one side of the story: in my humble (and arrogant) opinion, behavioral abuse is fully subjective (like sexual abuse), which means that if you felt you were abused, you were. There is no need to hear the other side of the story. You are the victim; the co-author is the perpetrator.
Unfortunately, there were no mechanisms in place to have your feelings heard, so I would go as far as saying that the system was perpetrating abuse on you, so there is both personal and systemic responsibility.
Now, back to your question. The paper is a different system from your lab/department. The authors on a paper are not judged on their efficiency, nor on their behavior. When you say "*his bad behaviour also cost me a lot of time*" - well, tutoring a student also costs a lot of time, often unnecessarily; however, if said student contributes enough to a certain paper, they deserve authorship.
So, on one hand, the paper is a different "entity" from your work. I even know of author lists that include couples and lovers, or authors from countries on opposite sides of a war, with all the emotional strain involved.
On the other hand, if you feel really confident, you can think about venting in the acknowledgements [1], giving thanks for an unhealthy working environment. It is an extreme measure, but hey, at least your paper is forever linked not only to the co-authors you do not want but also to a strong sentence regarding their "soft skills". Regarding future consequences for your career: few people read the acknowledgments in the papers of candidates for postdocs and the like.
I personally would consider such an acknowledgment a plus on your side (however, I have bad news for you: I am out of academia, mostly because I hate that universities and the like, which should set the example regarding personal respect and freedom, are instead as bad as they were decades ago and sacrifice everything in the name of "scientific progress" [2]).
[1] Be strong, and think about yourself; we are all very brave when typing on a keyboard, but as they say in the Netherlands, "the high trees get all the wind" ...
[2] I know, publishing a paper with the co-author is kind of sacrificing all your suffering on the altar of progress, but it is done to uphold a rule about co-authorship: everyone who contributed deserves authorship, because judgements of good or bad behavior are personal (what if your PI decides you behaved badly by straining the working environment with your opinions, and excludes you from the paper?).
Upvotes: 1
|
2022/12/17
| 4,912
| 20,389
|
<issue_start>username_0: I teach in a small college and one student seems to have an obsessive need to make comments aloud during class. Sometimes those comments are funny (he is the class clown) but sometimes they feel abusive and critical when aimed directly at me. Yesterday when I was reviewing an exam he made it a point to say twice that he learns better from my supervisor, "I want \_\_\_\_\_\_\_\_\_\_\_ to teach us; I learn better from him." Last night I also mentioned the wrong date of their next class. I apologized for misspeaking and corrected myself but not before a negative comment was made by him about being wrong.
In the past, I remember distinctly him making a negative comment when I made a mistake, "come on man, get it together" and a few minutes later when he couldn't answer my question in class, I jokingly said, "well, as you said to me, 'come on man, get it together.'" The whole class laughed aloud at my comeback to him.
Last night's comments really stung. When he commented that he wanted to learn from my boss, I agreed that my boss is brilliant and that he should reach out to him, to which he said he already did.
What do you do or say when a student basically tells you that you are not a good teacher, to your face, in front of the entire class?? Thank you!<issue_comment>username_1: You should check the rules and regulations of the college to see if he violated any rule. If he continues doing this, kick him out of the class. If he does it again, you need to take it further.
You cannot let him (or anyone) push you around like that. Once students know they can do it without any consequence other than hurting you, they will always do it.
I don't think "lecturing" him is a good idea. It is something that you cannot fix easily. A student with such a behavior will not listen to you. He probably needs to fail a few times (because of his behavior) to become a better man.
Years ago, I was such a student. I mocked teachers for their academic "incompetence" because I thought I knew way more than them. Years later, I regretted doing that so much. Such an attitude could not take one so far.
Furthermore, it could also be a general problem of *many* students studying at that small college. Probably, you will need to learn how to deal with it as well.
Upvotes: 4 <issue_comment>username_2: #### Keep things open but feel free to return fire
As a preliminary matter before giving advice, I'll just point out that the ideal of academia is generally pretty open about what you can say in class, both as a teacher and as a student. In particular, the ideal of "academic freedom" generally includes the freedom of academic staff and students to give their opinions and beliefs in relation to the subjects of study (which arguably would include the quality of teaching or resources for those subjects). This is valuable because it allows students to learn to remain calm and learn in contexts where they encounter unpleasant ideas or discussions. In view of that, so long as comments remain somewhat on topic to the lecture/subject area it is usually best to allow that commentary. The idea is to try to create an environment where everyone is an adult and is able to behave as an adult.
In the present case, I would counsel you to allow this student to make his adverse remarks about the quality of your teaching, etc. However, this could also be an opportunity to teach this young man that academic freedom and the resultant freedom of speech *is a two-way street* --- if he wishes to make negative commentary about his lecturer in front of other students during that lecture, he should be prepared for critical commentary to be returned in kind. Therefore, I would say that you are doing exactly what you should be doing --- let him make his comments but return fire with your own zingers. Try to keep things light-hearted to make everyone laugh and avoid escalating the conflict, but feel free to "put him in his place". You are probably not the best lecturer on campus and he is probably not the best student, so if he wishes to draw attention to the former fact, feel free to likewise draw attention to the latter. If this student insults you publicly, feel free to insult him right back (in a proportionate manner), ideally in a light-hearted way that will entertain the other students and keep things from escalating. Next time he tells you that he would much rather be taught be your supervisor, tell him "I'd rather be on a beach sipping margaritas with [insert beautiful movie-star], but it looks like we're both stuck with each other". If you are concerned that your return fire might escalate things, you might also consider having a short conversation with this student (or even with your entire class) just letting them know that you wish to allow an open dialogue in class, and treat them as adults, but that this is also a two-way street.
From what you have described, it sounds to me like this student has poor social skills and is not good at gauging when a comment is a legitimate observation on a topic and when a comment is crossing the line into being an insult. Given his lack of ability in this area, being on the receiving end of some of your own responsive barbs might actually allow him to see what it is like when someone publicly draws attention to your shortcomings, and it might teach him some humility.
Finally, it is worth noting that some of what you're encountering may become easier as you develop your own teaching competency and become more self-assured in your own teaching. You will get better and better at teaching as you practice it more and remarks like this won't sting at all. If I were to make a mistake in a lecture and get teased by a student it would just make me laugh at myself along with the other students and then I would probably see if I can get them back with a (light-hearted) zinger later.
Upvotes: -1 <issue_comment>username_3: Before you pursue any of the other suggestions proposed here (return fire, kick him out), talk to him one on one about this at an opportune moment, i.e. after class or during office hours in case he visits them.
**Do not** focus on how his comments "stung". **Do** focus on why his comments are inappropriate to make in class. That is, do not pretend that his comments did not hurt you, but do not dwell on it. Rather, make a clear, brief, and defensible argument why his comments are inappropriate.
I would tell him that he can raise concerns about the quality of your teaching with you in private, but that class time is not the time to make snide remarks about your teaching, because it disrupts the teaching process. Of course, you should only say this if it is actually true, i.e. if the reason that you want him to stop is because you actually believe that it disrupts the teaching process, rather than because his comments sting.
Example: "Look, judging by your comments in class, I see that you don't seem to have a lot of respect for me as a teacher. That's fine with me, you don't have to respect me personally, but you cannot keep making these kinds of remarks in class."
Then, depending on your teaching philosophy, you can supply a truthful reason why. For instance, because a certain basic level of authority is required for you to run the class smoothly, and what he's doing goes beyond constructive remarks and well into the territory of undermining that basic authority. Students are free to decide whether or not they like your teaching, but they don't need him egging them on.
Be brief and do not engage in discussion about this. State this as a fact and be prepared to back it up.
**Decide beforehand** how far you are willing to escalate this. If they are unwilling to be reasoned with, decide whether or not you should tell them that you require a certain basic level of respect from your students as a prerequisite of participation in the class.
Upvotes: 6 <issue_comment>username_4: **This is the adult world, not school.**
>
> What do you do or say when a student basically tells you that you are not a good teacher, to your face, in front of the entire class??
>
>
>
At the third level (i.e., in higher education) you tell them to be quiet or leave.
You might tell the entire class that this is the adult world and here they are expected to listen and learn, not behave like children.
This student is behaving like some secondary school kids do, and I would be surprised if they did not behave just as badly in high school. But the third level is the adult world and the adult world does not tolerate this behavior. Neither do you.
The remarks they are making are *designed* to annoy you. They're *gaslighting* you. Do **not** engage in tit for tat exchanges — that is what they want. That's what entertains them.
Check with your academic office (or supervisor) what the allowed punitive actions are. There will be a formal approach to this, usually with increasing levels of severity.
Typically I'd expect you to be required to do something like email a formal warning to the student that they must behave or could be subject to formal punishment, including suspension or expulsion from lectures or even the school.
But the bottom line is that this is the adult world and you hold all the cards. Do not play their game. Play the game as set out by the institute's rules for behavior and conduct. These give you power. Use it.
>
> Last night's comments really stung. When he commented that he wanted to learn from my boss, I agreed that my boss is brilliant and that he should reach out to him, to which he said he already did.
>
>
>
This was a mistake. You engaged with them using their rules. Don't do this again.
Upvotes: 5 <issue_comment>username_5: You describe the student as being somehow evil or bad. Instead, they might have a neurological disorder such as [autism spectrum disorder](https://betahealthy.com/20-warning-signs-early-symptoms-of-autism-you-should-never-ignore/?utm_source=bing&utm_medium=cpc&utm_campaign=Autism%20Symptoms&utm_term=autism%20symptoms&utm_content=autism%20symptoms) that they have never overcome. Some of the symptoms listed at the above link sound similar to what you are seeing.
If this student generally behaves like this in classes other than yours, then this might be the explanation. You can (probably) explore this with your colleagues who have had interactions with the student.
How you deal with it depends on rules and regulations. You might be able to refer them to a counsellor. You might, yourself, get advice from a counsellor on how to deal with such people.
Among other things, people on the spectrum tend to have poor social interaction skills. This is just what you are seeing. Punishment, then, isn't what is called for, but professional help may be. If this is the actual explanation you won't be able to correct it on your own.
Note that the privacy of such folks needs to be respected.
Upvotes: 3 <issue_comment>username_6: I get the feeling that you have not been a lecturer that long.
The best plan here is for you to meet that student a.s.a.p. in your office and talk through your - reasonable - expectations of all students during a lecture. Make it clear that a lecture is primarily education - any humor is incidental. Tell him you don't want him to continue commenting - negatively or positively - on your teaching capability. If he has an issue with your teaching, he really must decide to study the course independently; you are quite happy for his fees towards your module to be refunded.
Tell him to think about this and come back to you with a decision **before** the next scheduled lecture in your module.
Say all this in a calm matter-of-fact manner, no hint of personal slight.
If he shows up to the next lecture without responding to your decision point first, then ask him to leave pending a discussion with you. If he refuses to leave, address the class, saying "when Mr X finally decides whether my lecturing is up to his standard or not, I will continue." Then postpone that lecture to a later period, and just before you leave, invite the class representatives to meet you in your office to elaborate on this matter.
Upvotes: -1 <issue_comment>username_7: This depends on where you are. In the UK some universities have personal tutors for students. In my first year teaching here, my English accent seemed funny to two students, who kept laughing at me for the entire session. A quick email to the lecturer, with the implicit next step of talking to their personal tutors, quickly corrected the situation. Also, if the student has some particular condition, some unis let you know via email.
Upvotes: 3 <issue_comment>username_8: Your only option is to flash a brief smile when he does that, and spend as few words as you can on any responses (basically, *his* measure of success is how much he can distract and unsettle you). Think of phrases such as: "Good point. Anyone else?"
Your responsibility is to the learning experience of everyone in the room, and while they may find him amusing, ultimately they will feel he detracts from the value they get out of an expensive education. So most others will appreciate your firmness, even if they do not speak up.
Also (and there is some evidence of this in your question) this kind of student will always try to ingratiate themselves with your boss, their boss, and their boss. The basic mechanism is this: at some point the student underperforms and you have no choice but to give a poor assessment, which they have now ensured is pre-tainted. "He gave me a C because he had it in for me." (I know cases of guys [it is usually guys] who played this game with several supervisors all the way to a PhD. In a way, it is better if you've dealt with them as undergrads or master's students, because it is so much worse with a PhD/grad student.)
Do not be drawn into this game.
Upvotes: 2 <issue_comment>username_9: I feel that none of these answers are adequate, because none of them direct OP to take ownership of their role in this.
If, as stated in the original question, this has been going on for some time, then you have established for your students that this classroom environment is informal and that snarky "class clown" comments from the students are allowed. Had you established from the outset that they were *not* allowed - as I can assure you many other instructors do - this would not be happening.
After allowing "funny" comments from the class for an extended period of time, it is not appropriate to suddenly try to seek out a way to punish the student because you didn't like one of the comments. You made this bed - probably in a misguided attempt to seem easy-going and likeable. This is how that bed is, sometimes, and is one big reason why other instructors enforce more restrained environments.
The best thing for you to do is to meet with the student privately and frankly tell them that although you have allowed "quips" from the class in the past, going forward they will not be allowed. In the course of doing this, specifically acknowledge your role in setting the previous behavior boundaries, and explain that you have decided to make a change. There is no need to talk about your personal feelings about any particular comment, and in fact doing so undermines your position.
Upvotes: 3 <issue_comment>username_10: I find none of your examples to be an insult (as you called it in your subject). Not constructive, but far from being an insult.
Probably nobody likes to hear things like "I want \_\_\_\_\_\_\_\_\_\_\_ to teach us; I learn better from him.", but it might even be true. Tell him that he's not going to choose his teachers, case closed.
"Come on man, get it together" is not constructive either, but you should learn to ignore such things.
So, in my humble opinion: Take a step back and rethink how bad his remarks actually are. Do not shoot back. It's a fight one doesn't want to get into, and I'm not sure you would win it.
And... teaching is not about winning against your students in the first place.
Upvotes: 2 <issue_comment>username_11: The boat may have sailed with this particular student or even class, but for the future (in new/fresh settings), I would advise this:
* If you are generally a humorous character yourself, joking around, being very jovial, dial it back in class. You can still be that way before and after, but make it a habit to be formal or maybe even a little stiff in class. The class room is not entertainment - you are working there, and some of the students are as well. (This is assuming from your SE handle that you're teaching a "hard", technical subject, anyways.) Your teaching should be clearly visible as an earnest affair, making the misbehaviour of the student even more obvious.
* Do not feel alone in this. The troublemaker is not only disrespectful to you, but also to the other students, and is disrupting them as much as you. You can be sure that a good portion of your other students are quite annoyed as well, assuming most of them are there of their own free will and not due to compulsory education.
* Do not engage with silly comments. Treat them as ugly presents handed to you - you do not need to accept them. You can observe them and decline to accept, if that makes sense.
* If anything, try to be compassionate. Have pity on the student who has to behave like that, and who will very likely at some point run into problems because of it, since they probably won't function too well in work or private settings.
* Hurtful comments take up only a second of the time of the troublemaker, but can work on *you* for days or months. You have spent time thinking about it; you spent time writing this question; you are spending time reading the answers, and so on and so forth, while the troublemaker has very little investment, and probably a modicum of joy. Be aware of this imbalance and use it to lessen the impact of the comments in your own mind.
* There is no need whatsoever for you to take the "advice" or a comment from someone to heart, unless you respect them. They are just moving their vocal cords and pushing air molecules around. They could be ill or a bad person, but certainly not a leading figure for you. Reserve that for people you look up to.
These tips are not meant to reframe the situation, but to give you tools to lessen the impact in your own mind. I assume the student in question is not running around painting a bleak image of you with your supervisors, or anything like that, and thus really hurting your career.
It is generally hard to impossible to change other people; it is much easier to change yourself. If indeed you feel a need to change the situation outwardly, instead, the proper measures would depend heavily on your local culture and regulations - best discussed with your own supervisor or support structures of your uni.
Upvotes: 1 <issue_comment>username_12: >
> Student continuously makes insulting comments during my lecture - how do I ***handle him?***
>
>
>
After reading the actual descriptions of the "insults" the first thing that came to mind was [neurodiversity](https://en.wikipedia.org/wiki/Neurodiversity#Within_disability_rights_movements) which is discussed in [@buffy's answer](https://academia.stackexchange.com/a/191773/69206).
It doesn't sound like [Tourettes](https://en.wikipedia.org/wiki/Tourette_syndrome) necessarily, but that is certainly a condition that does not warrant "handling" the individual.
To me it sounds more like just being honest: speaking the first thing that pops into one's mind, without the social-appropriateness filter keeping it from moving from thought to vocalization.
"Get it together" sounds to me like someone trying to fit in, be more social, but in a way that seems inappropriate to most of us.
If there's room to allow a bit of this from this student, without it seriously impacting other students, it might be worth considering just ignoring it, especially if their participation and progress is otherwise within an acceptable range.
Perhaps "handling" the student in some specific way isn't really necessary, and working on not letting it bother you, or even trying to see the humor and honesty in it might be a better approach. That's based on the question as written, there could be extenuating circumstances of course.
Upvotes: -1
|
2022/12/19
| 2,449
| 10,216
|
<issue_start>username_0: I left academia years ago and am self-employed, but haven't stopped refereeing papers for one of the journals I published a few papers in. Now I've been informed by the editors that I've been selected as a distinguished referee. They are going to send me a certificate and publish a note honoring me and the other distinguished referees.
Here's the catch: I haven't updated my institutional affiliation on their referee portal since I left academia, so they believe I am still employed by the last institution I worked at. And they are going to include that institution in the honoring note as my affiliation.
I'm afraid that if I notify them that I am actually not employed by any institution, they will change their mind and withdraw the award from me, for it would look super weird if they published a note honoring someone outside academia as their distinguished referee. To inform me about their decision to withdraw the award, they might just refer to some allegedly existing internal rules so that I couldn't really complain.
But if I don't inform them about that, it will create a potentially troubling situation both for me and for them, for the institution might react, "Hey, this guy left us ages ago, so would you please publish a correction."
I really want this award, so I want to avoid any risk of getting it withdrawn. After all, I've been selected on the merits of my reports and have done great service to the journal. And the award would look really nice in my CV as a recognition of my expertise.
What should I do? Is there indeed a risk of getting the award withdrawn if I play it straight and tell them I am not employed by any institution? Should I rather ask my last institution whether they don't mind if I use them as my affiliation to get the award? Or should I try to arrange some affiliation with some institution asap, e.g., as a self-funded visiting scholar?
---
**UPDATE:** I feel I have failed to bring my point across, so I'd like to explain why exactly I think the editors may withdraw the award. Of course, the editors have no doubts about the quality of my reviews. It was the main criterion. The real problem is this: how will it look if the editors, in effect, publicly admit to sending manuscripts for review to an individual who is outside academia and not affiliated with any institution or even a business company? It raises questions and thereby somewhat undermines the reputation of the journal. To make things worse, I haven't published anything since I left academia years ago. The editors know about my expertise from my reviews, but how can the scientific community know? So, I'm afraid the editors may simply be unwilling to trigger unnecessary questions or rumors in the community by giving me the award.
More broadly, I think the honoring note is not only a thank you to referees, but also a way for the journal to show off: "Hey, look who is refereeing for us - Great People from Great Institutions." It's supposed to add to the reputation of the journal. And giving the award to someone outside academia does just the opposite, no matter their actual expertise. It's simply a move against the reputation of the journal. My last institution, where the editors believe I still work, has a big name, and I'm afraid this may have been a hidden factor. Also, I went through the list of all past recipients of this award and found no one unaffiliated with any institution.
So, despite the three answers suggesting I should play it straight, I am still not really convinced it's the wisest move. There are relatively ethical workarounds mentioned in my question. I even think that telling them nothing might actually not be a bad idea. Even if the institution asks to publish a correction - which is not very likely - the award will have already been given to me, and I will have no problem admitting to having left the institution and having failed to update my affiliation data. After all, I did work there.
I hope my point makes better sense now, and I'd appreciate more insights, especially from people who worked on editorial boards.
---
**UPDATE 2:** Thanks for all of your answers. I am going to play it straight. And it has occurred to me there's one strong reason for the editors not to withdraw the award: if they do, this will put them in bad light if I spread information about it.<issue_comment>username_1: Clearly you need to tell the journal that you have left academia and no longer have that affiliation. And I think you should apologize for leaving it on their referee portal.
I think the chance of them withdrawing the award is low, as you did the work for which it is being awarded.
I don't think you should ask your former institution whether they mind, or try to arrange a visiting scholar affiliation. These things would not be honest.
Upvotes: 4 <issue_comment>username_2: You overestimate the impact of your institution or being an academic. People invite you to review articles because you are an expert in the field, not because you are a professor or work at Big Name Institution. Conversely, being a professor or working at Big Name Institution does not automatically make you a good reviewer.
The chance of your award being withdrawn is effectively zero, and the only thing that is likely to change is that they will update your affiliation in their honoring note (the name of your current company should work).
**Edit:**
>
> Of course, the editors have no doubts about the quality of my reviews. It was the main criterion. The real problem is this: how will it look if the editors, in effect, publicly admit to sending manuscripts for review to an individual who is outside academia and not affiliated with any institution or even a business company? It raises questions and thereby somewhat undermines the reputation of the journal. To make things worse, I haven't published anything since I left academia years ago. The editors know about my expertise from my reviews, but how can the scientific community know?
>
>
>
These things really do not matter. Nobel prizes have been awarded for research that was undertaken outside of academia ([example](https://en.wikipedia.org/wiki/Ivar_Giaever#The_Nobel_Prize)). Currently we are also seeing cutting-edge research coming out of e.g. Google DeepMind. The fact that these researchers work outside of academia does not affect their ability to conduct peer review in the slightest.
If you left academia years ago and have not published since, then that is a problem, because it indicates you will eventually cease to be an expert in the field. However, that has evidently not come to pass (or you are still working in the field even though you are not publishing), since the editors still find your reviews valuable.
So yeah - don't worry, the only thing that's likely to happen when you tell the journal is that they will update your affiliation in the honoring note.
Upvotes: 6 <issue_comment>username_3: I worked on the editorial boards of 3 journals (for the record, my journals do not give referee awards). Academic/business affiliation is completely irrelevant for us when choosing a referee; what matters is the relevance of a person's prior work (typically in the form of publications) to the paper. That's all.
Regarding specific questions:
1. "How will it look if the editors, in effect, publicly admit to sending manuscripts for review to an individual who is outside academia and not affiliated with any institution or even a business company?" - it will look just fine. A journal is judged by the quality of its publications. The community knows all too well that sometimes referees/editors make mistakes and accept faulty papers that have to be subsequently withdrawn. As long as your journal does not have this problem, everything is OK.
2. "It raises questions and thereby somewhat undermines the reputation of the journal." - no, it does not in the slightest.
3. "To make things worse, I haven't published anything since I left academia years ago." - this is irrelevant.
4. "The editors know about my expertise from my reviews, but how can the scientific community know?" - scientific community judges the quality of journals by other parameters. Typically, the community trusts editors to do the right thing. This trust is undermined by faulty publications, but that's a separate issue.
Upvotes: 5 [selected_answer]<issue_comment>username_4: >
> I even think that telling nothing might actually be not a bad idea.
> Even if the institution asks to publish a correction - which is not
> very likely - the award will have already be given to me, and I will
> have no problem admitting to having left the institution and having
> forgotten to update my affiliation data. After all, I did work there.
>
>
>
Sure, you did work there, and you forgot to update your affiliation. But now you admit it here, and you admit it to yourself, so you can no longer honestly say "I forgot about ...".
First point: why is a distinguished referee award so important to you? It seems it is more important for them, to show they have referees from big-name universities.
Second point: saying "I forgot to update my details for the past X years" is an extremely poor excuse; it will reflect extremely badly on you (is a reliable referee one who withholds such information?) and on the journal (no due diligence checks).
If you really have a strong need to have the reviews you performed publicly recognized, join Publons and try to be the best in the metrics there ...
Upvotes: 2 <issue_comment>username_5: For whatever it's worth, I did a brief investigation of the APS's [list of outstanding referees](https://journals.aps.org/OutstandingReferees). The list is long, and it is no surprise that most people are from universities and the like, but it's also not difficult to filter out lines containing *Universi…, Institu…, College,* etc. You then end up with a list that is small enough to search manually. The remaining affiliations contain, amongst others:
* Daimler
* Hewlett-Packard
* Unilever
* General Atomics
* Raytheon BBN
* OEwaves Inc.
Thus, at least one publisher did grant referee awards to people in industry.
Upvotes: 4
|
2022/12/19
| 597
| 2,435
|
<issue_start>username_0: I have submitted an article to the International Journal of Surgery, and it has gone into production. Just as I was signing the licensing agreement, it caught my eye that the name I had put in as the main author only contained my first name and my last name. **It did not contain the middle name that is present on my national ID card, as well as on my University ID.**
E.g., the name I put in the manuscript: **<NAME>**
The name on my National ID: **<NAME>**
It is entirely a mistake on my part, and I should've paid better attention.
I have no clue what to do now. The deadline to sign the agreement is today. What do I do?
A, Send an email to the producer to change my name in the manuscript **before** signing the licensing agreement.
B, Explain the mistake in the email and ask the producer to change the name **after** signing the license? **(Note: the license also contains my non-ID name, e.g. <NAME>)**
**C, It's not necessary to add the middle name, it's fine if the name on the manuscript doesn't match my ID name.**
It will be tremendously helpful if someone can guide me on this, and quickly.<issue_comment>username_1: Sign the agreement with your full legal name, *e.g.* <NAME>.
If you want the author name in the publication changed, email the editor with that request, but be prepared to be told that it's too late.
**Edit in response to comments:** In general, a legal agreement should be signed with one's legal name. My (non-lawyer) opinion is that it won't really matter in this case, but if it were I, I'd use my full legal name.
As others have said, publish with any name you like... with certain limitations like not impersonating others. It's probably best to pick a name and use it consistently on your publications.
Upvotes: 2 <issue_comment>username_2: If your paper has only just gone into production, you should be fine - just tell the publisher, when they send you the proofs, that your name is incomplete, and give them the full name. If they have already sent you the proofs (and you approved them), then chances are it is too late, but you can still try. Again, write to the publisher and give them your full name (or whatever name you want to use).
Don't email the editor. This has nothing to do with peer review and is not handled by the editorial board. If the editorial board need to be involved, the publisher will contact them.
Upvotes: -1
|
2022/12/20
| 1,235
| 5,128
|
<issue_start>username_0: How much in practice would someone minoring in a subject end up knowing about the subject as opposed to somebody majoring in it? How much extra work does it take to get a minor?<issue_comment>username_1: There are at least two reasons to minor. One is to get some background in a field complementary to the major: say, chemistry for a biology major. Math and philosophy might actually go well together.
The second reason is just because you have an outside interest in something: literature for a biology major.
Some people get a minor because the required courses at a liberal arts college already contain most of it anyway, so, why not. My philosophy minor was like that.
Some people take more than one minor. Actually, double majoring is also possible in many places.
The requirements for a minor might be approximately half of what they are for a major and might include a few upper level courses.
Some people just want a broad education. This has been true since the Renaissance, I'd guess.
---
Background on the US undergrad system.
A bachelor's in the US is almost always four years; a few students manage it in less, and a few others require more if they have difficulties.
Things are measured in "credit hours", and a credit is (in most, though not all, places) about 15 hours of instruction, or an hour a week for fifteen weeks. Courses vary in how much they count, with three credits being typical but with variations in every program. Labs might require two hours per credit. Some courses are just one credit. There is no standard, but the above is common.
Students take about 15-16 credits per term for 8 terms (semesters), so the requirements for a degree are around 8\*15 = 120 or so credits, maybe a bit more.
A major is usually a bit less than 60 credits and the rest of the courses/credits are general education requirements: history and such in a chem major.
So, it doesn't take "four years" for the major, but four years for the degree. But the prerequisite structure of the courses implies that the major courses are spread out with two or three taken per term.
A minor, on the other hand, might be 30 or so credits, which the prerequisite structure spreads out: perhaps ten courses of 3 credits each taken over 8 terms, or 4 years. Some of the courses in a minor, indeed some in a major, might be required for a bachelor's in any case. For example, science is required for many (most?) degree programs, so a chem major can satisfy that requirement in chemistry and thereby get some of their "major courses" by taking the entry-level science courses in chemistry.
Squeezing in two minors in addition to a major is a bit hard, as are double majors. It probably requires overloads and hard work. I once successfully completed 21 credits in a term where 15 was the norm. But I tried 20 in the following term and had to drop one as it was too intense and wound up with 17 that term.
Note that in both the major and the gen ed requirements there is room for some choice of courses. They aren't rigidly prescribed. Some requirements exist just to bring you up to the total credit-hour requirement. So, using these options you can take courses in your minor or your major or (for some) whatever you like. There is a lot of flexibility. As a result, two math majors might overlap only in the core required math courses and the core required gen ed courses, normally taken early on.
---
A few places like Dartmouth are on a quarter system with four quarters spread over the entire year. Most students there study for three of the four terms/quarters. The individual terms look different but the totals come out to be pretty equivalent. It is just in how you slice the apple. Since the summer quarter is just like the others, it makes better use of campus facilities overall. Faculty likewise normally teach in only some of the quarters, but when I was a visitor there (long ago) the standard was four courses per year.
Upvotes: 1 <issue_comment>username_2: >
> How much in practice would someone minoring in a subject end up knowing about the subject as opposed to somebody majoring in it?
>
>
>
Things vary of course. But a minor is maybe 5 classes whereas a major is 10+. So, about half as much. This can be useful if you only care about certain parts of a subject -- get a chemistry minor by taking physical chemistry + inorganic; you can skip organic and biochem.
It's also sometimes true that if you major in science and minor in certain humanities subjects (or vice versa), you can cover almost all of your gen eds this way. So some people can get a minor "for free" by choosing classes strategically.
As for how useful this is in practice: probably not much. It's a good way to document an interest; i.e., anyone can say "I'm interested in writing," but having a minor in writing shows that you mean it. And of course if your minor means you take more classes than you would have taken otherwise, then you'll know more stuff, and that is usually good at some level. But I suspect that few hiring or admissions decisions would have gone the other way but for the presence/absence of a minor.
Upvotes: 2
|
2022/12/20
| 1,203
| 4,802
|
<issue_start>username_0: I'm in a situation that I've never faced before: I believe that I've found the solution to a classical mathematical problem and would like to get the solution published in a reputed journal. The solution might be wrong and I'm willing to accept that, because I've not shared the solution with anyone yet, but if it is correct (which I hope), it is good enough to be presented in a good journal.
The problem: I want to pursue my masters next year. I'm not a prodigy; hence I've applied to a few good universities but not to the top ones. The journal that I seek to publish my paper in takes 14–18 months on average to accept a paper and a similar time to publish it. Hence, if I want my research outcome to help me apply to some top universities, I'll have to wait that long, hoping my paper gets accepted and published.
I'm a bit protective about my work and wanted to share it with the reviewers of the journal only, but then I got to know about arXiv, where I can publish my paper and it will be visible to everyone. The journal I seek to publish in accepts arXiv-published papers, so that should not be a problem.
Can you please suggest if it is advisable to publish a paper with a potential solution to a classical problem openly on arXiv and then use the arXiv link to share my paper and convince the selection committees of my candidacy?
Is there any other solution to this problem that you'd recommend? This is the first time I'm going to publish a paper.<issue_comment>username_1: I don't know how admissions works in India, so this might be good advice or not.
You don't seem especially certain of your proof. That worries me and normally one would then want to work with a colleague on the issue, and probably share authorship.
However, you can submit the article. If it turns out to have obvious errors you will probably hear back quickly. If they put it out for review you have some chance of success, though it takes time as you note.
While it is out for review, you can have a section of your CV named "Work in Progress" and list this work there. Such a section is valuable to have on a CV in most places and helps with admissions, but I don't know about India.
Now when you apply for study and send the CV, you might be asked to share the paper with someone in the admissions process. You can do it then, privately, without publishing it on arXiv.
If the journal you send it to doesn't accept it for reasons other than fundamental errors, then at least you haven't compromised it by informal publishing, which some other journals might not accept.
Note that publishing on arXiv might get you some feedback on it and attaches your name to any ideas in it, but it will be out there for people to see as it is really a form of publishing.
Upvotes: 2 <issue_comment>username_2: >
> Hence, if I want my research outcome to help me apply in some top
> universities, I'll have to wait that long, hoping my paper gets
> accepted and published.
>
>
>
There are so many ifs (if you submit for publication, if it is accepted, if it is published within 18 months, if it is considered by the committee) that what you are imagining is too much of a stretch. If you do end up with a publication, it will help your access to university.
* Do you have publications as of today?
No.
* Will you have publications 2 years down the line?
Maybe; so, to be on the safe side, stop dreaming and consider it a no.
>
> I want to pursue my masters next year. I'm not a prodigy
>
>
>
I am just a random guy on the internet, but you are focusing on getting a paper published to gain admission to a master's, when the real question is "what will you do with your master's degree, given that you are a not-a-prodigy, maybe-median student?"
Again, having a paper published within 2 years and getting admitted to the university is the combination of two unlikely events. And it is even more unlikely than "after my average master's at an average university I will pursue a PhD in maths, possibly abroad".
Apply to the university you would like to attend. Plan A is that you get accepted. Unlikely. Therefore, prepare a plan B for the possible rejection. What can you do in the next 18 months to improve your chances of admission? Or what can you do in the next 18 months to sustain yourself economically? Maybe you do not need to work full time and can work on your paper. With that, we come to:
>
> I'm a bit protective about my work
>
>
>
Then put it on arXiv. Even with it on arXiv, it will be quite difficult to protect your intellectual property, but without such a timestamp from arXiv it will be almost impossible, given your situation (no affiliation, no previous work, no contributions to conferences, etc.).
Sorry for being abrupt; you are right to pursue your dreams, but only if you can afford them.
Upvotes: 1
|
2022/12/21
| 667
| 2,855
|
<issue_start>username_0: Once in a while I find out that an interesting conference in my research area has taken place, but I did not hear about it. Usually when I find out about upcoming conferences it is when a friend, colleague or collaborator lets me know. But sometimes interesting conferences are organized by people outside my immediate research circle, so I don't hear about them until it is too late. Is there a way to keep tabs on what conferences are going on in a particular research area? My area is math/mathematical physics, but I imagine this is a question that academics of all fields might be interested in.<issue_comment>username_1: At my university, there is an internal website (and staff!) devoted to precisely this issue. The site shows the conference topic, conference website link, dates of the event and submission dates for papers. The great thing about the list is that it is curated, so one does not have to wade through an overwhelming number of pages about conferences that are either scams, or not well respected.
I know that my university is not alone, at least in my part of the world ... so it would be worth your while to inquire at the office of the VP Research at your own university about something similar.
You might also have a look at [this page](https://guides.library.uwa.edu.au/keepingresearchcurrent/conferences) which is a list of links and guides to various conferences. It's not as well curated as my university's internal list, but it's a good start.
Upvotes: 2 <issue_comment>username_2: For me, a common way to find out about conferences and workshops in my area is being subscribed to a few mailing lists. These often also carry job advertisements for my field.
In my experience, most events are either part of an annual conference series or by invitation only. For the latter, you'll obviously find out if you are invited (by receiving the invitation), and don't really need to know otherwise. For the former, once you've learned about the conference series, you can actively check for the next year's edition. It's common that people are aware of things like "The deadline for conference X is usually in January", which then informs a more specific Google search.
Upvotes: 2 <issue_comment>username_3: Here is a short list of tips:
* Ask your supervisor. They should know about a lot of upcoming conferences.
* Learn about the "yearly cycle" in your field. In which month in which conference, when are deadlines usually…
* Subscribe to mailing lists in your field. Follow some societies or active people in your field on social media.
* Do regular searches on the internet (e.g. every two or three months).
On top of that: you will become familiar with more conferences over time, and you'll build a network through which you'll also get information about smaller meetings.
Upvotes: 3 [selected_answer]
|
2022/12/21
| 1,734
| 7,677
|
<issue_start>username_0: I was reviewing an application for a grant and found out that one of the applicants has included a publication on his CV that does not exist in the journal. It was supposedly published several years ago. I counterchecked the list of publications of that particular journal, including the issue and volume, but it's non-existent. I also googled the title and checked the Google Scholar account of the applicant -- the paper could not be found.
Is this an academic offense? How can I investigate further, and what should I do if I am ultimately unable to find any proof of this paper's existence?<issue_comment>username_1: For clarification, you may consider asking the applicant directly. There might be a silly explanation that avoids all confusion.
Of course, depending on the answer of the applicant, you may take the necessary steps.
Claiming something that doesn't exist is of course a very serious academic offense. That is why you should not hesitate to ask applicants in case you have doubts or things do not add up. Something polite along the lines of: "We were checking your CV and we were wondering where we can find your such-and-such publication."
Edit: I answered without looking at the comments. I still believe that directly contacting the author, either personally or through some other channels of the university (e.g. secretary) is possibly the only course of action that can clarify things at this point.
Upvotes: -1 <issue_comment>username_2: There is a mismatch between the title of your question and its content. From the content, it looks like you could not find the problematic publication. However, the question you pose already assumes that you did everything possible to find the publication.
First: ask yourself (and the SE:Academia community) if you really did everything you could to find the mentioned paper.
Second: since we live in a civilized world (or at least we try to have it that way), there is one important concept:
[PRESUMPTION OF INNOCENCE](https://en.wikipedia.org/wiki/Presumption_of_innocence)
So unless you have sound proof that the publication claim is consciously and deliberately fake, you should take a more neutral approach to the matter (i.e., remove the word *false* from the title).
One final consideration:
>
> "checked the google scholar account of the applicant"
>
>
>
Google Scholar (GS) is just the name of a data-harvesting service, powered by a very obscure algorithm developed by an advertising company called Alphabet. If an item has not been indexed by GS, that does not mean the item does not exist. And even if an item is indexed by GS, you should still have reasonable doubts about the accuracy of the record.
If you really want to use your time efficiently when looking for articles and the like, the same 5 minutes you would spend on Google Scholar can be better spent sending an email to your trusted librarian(s).
Other approaches may include contacting the co-authors or contacting the author, but those may not be viable because of the independence or confidentiality of the application review you are performing.
Upvotes: 4 <issue_comment>username_3: Do not contact the applicant directly: There is a reason why reviews of grant applications are anonymous, and you would be undermining the refereeing process by such an action. Instead, contact the funding agency that solicited your review, explain the situation and ask them for advice. They may choose to contact the applicant or another course of action.
For the record: I reviewed numerous grant applications to various funding agencies from the US, Canada, Latin America and Europe (three per year on average). I never encountered a grant application which contained a reference to a nonexisting paper.
Upvotes: 6 <issue_comment>username_4: You only really have a few options, if the granting institution is "typical".
1. Your best option is to contact the person coordinating the review with your concern (in the NIH hierarchy, that may be a program officer or scientific review officer).
2. You can ignore the issue.
3. You can hold on to the issue, saving it for synchronous discussion if the review panel functions that way (I suspect anyone else on the panel would be miffed that you didn't raise the issue with the review officer if you did that, and the review officer would probably not be happy either)
FWIW, if a grant reviewer contacted me [the applicant] in any way about the review of one of my grants during the review period, I would immediately contact the Scientific Review Officer for the study section, and maybe (after reading up on the rules) the study section chair. I'd probably request that the reviewer be placed in conflict for the grant, which means (for the NIH, anyway) that the reviewer would not even be allowed to be in the room for the review, let alone submit anything in writing that the rest of the panel had access to. I suspect that request would be honored. I also suspect that, at the very least, the panel administrators would explain the error of their ways to the reviewer involved.
Sitting on top of this analysis is the realization that many people in the room are looking for an excuse to assign a grant an unfundable score (not that I approve of that), and raising this issue during the review may well poison the grant for funding.
Upvotes: 4 <issue_comment>username_5: It seems to me that there's always an onus on the applicant to correctly specify all the necessary details of his publications: the stuff we were all told to quote as undergrads: journal title, paper title, authors, vol number, issue number, page numbers x - y.
You imply that the journal vol and issue numbers provided didn't yield any paper by the applicant. You can put his application on hold pending more details being provided *urgently* from the applicant.
I don't think it's your job to hunt down other sources resulting from typos, author omission or similar journal titles - you have enough to do and it's the candidate's job to get that right.
Upvotes: 2 <issue_comment>username_6: @username_2 has mentioned the presumption of innocence without giving context about how this could have happened. Here is one very plausible scenario in which this is just a mix-up:
When there are multiple authors it is easy to get some details wrong, or mix them up. The paper could have been published in another journal, even under another title. It is entirely possible for co-authors to change the title and send the paper to another journal after a rejection. It is also possible for the author to forget about this, or never to have noticed it. At a later date, the co-authors might have sent an email stating that the paper was published, and the author then included the reference to the paper, in its original form, in their CV.
Upvotes: 2 <issue_comment>username_7: I often review grant applications for the EU. I regularly encounter this problem, which is why I always spot-check one or two of the references given. However, I am prohibited from contacting the applicant, so I take this up with my contact at the agency. I have also found blatant plagiarism in grant applications that caused the applicant to have to refund money from a previous grant. Sometimes the agency takes no action (a woman in a recent application had an entire paragraph referring to the PI as a man, which made me think copy/paste, but the agency said: that's okay). Sometimes they downscore the application. Sometimes more action is taken, but that is beyond the scope of my review.
So my advice for applicants is: Make sure your references are findable and correct!
Upvotes: 1
|
2022/12/21
| 2,992
| 12,883
|
<issue_start>username_0: I am a PhD candidate in Teleinformatics Engineering and, during my academic career, I was required to learn English in order to write articles, communicate with partners around the world, read books, and so on. It has been a quite useful language. I suspect that it would not be mandatory to learn a third language, though. A friend of mine learned a third language due to situational circumstances. After that, the knowledge seemed to become obsolete (I'm not sure, actually) once they came back to my country (Brazil).
I wonder whether learning a third language is really useful for scientific purposes in Engineering, or whether it is superfluous and it is better to concentrate on Mathematics only. If there is a third worthwhile language, what is it? I've heard that there are many books in German but, honestly, I've never needed to read anything written in German; it is always in English.<issue_comment>username_1: There might be a lot of reasons to learn another language (love of poetry, say) but for purposes of work, I'd guess that it is probably a waste of time and effort to do so speculatively. After all, you might choose the wrong language and spend time and effort that might be better spent elsewhere.
If you have a need that can't be otherwise met, then yes, learn the other language. For example, auto translators seem to do a terrible job on math papers, not being able to deal with either the symbols or the specialized terminology. In other fields it is probably not so much of a problem.
And, I'll guess that translated poetry isn't very poetic in the result.
But don't let a "hobby" interfere with your mainline pursuits. There is no reason not to have a bit of fun in life, of course.
Upvotes: 5 [selected_answer]<issue_comment>username_2: If you want to see it from a purely pragmatic point of view, there are a few languages that are worthwhile because of the heavy investment in R&D from the respective cultural areas: French, German, Spanish, Chinese (Mandarin) or Japanese (no offense to the ones not mentioned; feel free to add them in the comments).
Please note that Spanish (South America, Central America, Spain) is much more heterogeneous than, let's say, German (Germany, Austria, Switzerland).
If you learn one of these languages, you will realize how many technical and academic positions are available to those with prior knowledge of the language. Instead of competing only for the excellent openings that are "open to foreign applicants", you will also be able to compete for the very good openings that are "open to whoever understands the language". It is a fine balance between supply and demand. There are many more such openings in Austria (more openings than candidates), with much less competition than for similar positions in California (demand is the issue: even if there are more openings, there are too many qualified potential candidates) or in Morocco (supply is the issue: there may be few candidates, but openings are even fewer).
Of course you can be perfectly proficient relying on English alone, but you may end up in trouble when you hold a role with responsibilities and the documents defining your duties and rights are in the third language, with no reliable information obtainable otherwise.
Final remark: please note that this discussion is quite abstract. Although we are immersed in an English-speaking world, it is very hard to learn English without an emotional bond. Learning the languages mentioned above purely on the basis of potential career benefits is basically impossible. You need an emotional bond (for example, but not limited to: the country is hosting you as a refugee, your significant other is a native speaker of one of the languages, you love the culture expressed by that region...)
Upvotes: 2 <issue_comment>username_3: I agree with username_2's answer but have a few points to add. (Caveat: English is my fifth language.)
1. The answer very much depends on the effort you need to make to learn the new language: if languages (or at least the specific language you intend to learn) come easily to you, the win/cost ratio is very good and you should go for it. If you had to make a huge effort to learn English, then you are in a very different situation.
2. Beyond the advantages listed by username_2, there can be subtle benefits to a third/nth language: it is easier to make connections and build your network. People often think of others who speak their language as more similar and likeable, it can start conversations. Obviously, speaking only English will not exclude you, but in my experience, other languages can make new opportunities appear.
3. You do not need to be proficient for the benefits to appear: sometimes even being able to read an abstract or have a short conversation during a coffee break in a different language can be an asset.
+1 (this is personal and may not apply): for me, thinking is language-dependent. I think differently in different languages (because of the vocabulary, word associations, grammar, syntax, etc.), so I sometimes gain new insights and ideas when I translate concepts back and forth.
Upvotes: 3 <issue_comment>username_4: If you are semi-fluent (e.g. able to read paper abstracts) in German when you finish your doctorate, then you can apply to Germany, Austria, Switzerland plus parts of Belgium, Italy and Brazil for fellowships or non-academic research jobs.
It will be useful in getting stuff published in German-based academic journals.
A lot of your decision may depend on the extent of research *compatible with your own interests* in the aforementioned countries, as well as your own feeling about working and living in them.
Maybe you can find some PhDs or fellows in your current university that may be able to give you a better appreciation of work/life in those countries.
Upvotes: 2 <issue_comment>username_5: In addition to the answers already given:
1. Governments and universities may be keen to grow links with specific countries (e.g. China, as a lucrative source of students, or developing countries as a form of economic support). If you speak the relevant language(s) and are willing to invest time on networking and building the desired links, this can be an effective route to promotion and (one flavour of) career success.
2. Certain research fields may benefit from knowledge of a specific language. For example, in many social sciences and humanities fields it may be helpful or even essential to be able to interact with people (and written sources) in their native language. More prosaically, with many telescopes being located in South America, it may be advantageous for an astronomer to learn Spanish in order to make visits easier and more productive (even if academic colleagues speak good English, one needs to interact with taxi drivers, hoteliers, facilities staff, etc.).
Upvotes: 2 <issue_comment>username_6: I was recently on the job hunt as a manager / lead engineer. I got disqualified in one case because I didn't know Hindi and in another because I didn't know Spanish. So a conversational level of a language may help a lot later in your career. The world is getting smaller by the day. Employers are looking worldwide for talent and find it all over. The more languages you speak, the more potential talent you can tap. The reign of English, I fear, is coming to an end. You might want to consider Mandarin.
Upvotes: -1 <issue_comment>username_7: If you ever intend to move to another country earnestly (i.e. to actually live and work there for a significant time, not only for a few months or a year), I certainly do recommend learning the language as well as possible. Definitely well beyond the level required in the coffee kitchen.
I work in a German IT company (but a subsidiary of a global concern), and we have a great many employees from abroad; both as employees locally, as well as frequent partners in mixed-shore projects. Yes, we use English wherever necessary, but at some point there *always* are situations where the lack of a rock-solid grasp of the local language makes things difficult for all involved. In IT, with our skills shortage, it's not really an issue, but in an industrial sector with less splendid job chances, this might just be the tipping point where your favourite company takes the other applicant who may not be as great as you, technically, but who has a firm grasp of the language.
Regarding *when* to learn that language, I'd say as early as possible - so as soon as you know that there's a certain chance that it may happen. The effort you would put in should of course scale with the certainty of you ever needing to actually use it. But starting early means you'll be younger (everything is learned more easily when you're young), have more time until you need it, and can learn with less pressure (which is very valuable). You might simply start watching movies and reading books in the language; things like that.
If you just want to open up possibilities, you can also simply pick any language of a country you like for vacations - even just chatting with the locals can make a stay that much more enjoyable.
Finally - after 3 or 4 languages, learning even more seems to get easier and easier, as you can transfer knowledge from one to the other to some degree. So even if you don't actually need that additional language now or ever, if you ever should come into a dire situation where you must learn a further one, you might be very thankful.
All of this assumes that you find *something*, even if it's just a tiny aspect, that is somewhat interesting to you. Do not waste effort on learning something you're not interested in whatsoever - that would just be futile anyway.
Upvotes: 0 <issue_comment>username_8: It depends on how easy you find the language to learn, whether you have people around to help you, how much of a struggle it is and how much benefit you will get from it. If you find the language extremely easy to learn and it is beneficial to know it, I really see no excuse not to learn it.
If it is genuinely painful for you to try to speak the language and you genuinely don't enjoy learning it, I guess it's forgivable to skip learning it. For example, I studied Arabic as a third language to live in an Arabic-speaking country and did not find it that difficult, as my natural ability to pronounce it seems to be quite good. Also, learning Arabic wasn't that bad after studying both English and Spanish, as Arabic has a lot of sounds which transfer over quite nicely from those two languages.
Other people truly struggle to learn Arabic or find it impossible, and if it's not strictly necessary to use it at a university where a lot of staff are foreigners who don't speak much Arabic anyway, then it's forgivable not to expend a lot of time and effort learning Arabic which could instead be put into research. I still know for sure that those people who do not learn it run into problems quite a lot when they travel around the country outside academia without knowing the language. I personally think the effort put into learning the language outweighs the problems which occur if you don't speak it.
I have put a lot of effort into learning Spanish in the past and I have to admit that it has not really had a big (or even any) impact at all on my scientific career, as an extremely small number of articles are written in Spanish and I don't even travel much to Spanish-speaking countries. The main benefit has been that I can read famous novels from those countries in the original language, listen to podcasts and radio shows in Spanish which offer different perspectives from those which I usually hear and that I have made good friends from those countries.
Upvotes: 0 <issue_comment>username_9: I am bilingual and have been using English as my main written language for 20+ years (not that it makes me a better speaker of English, but it hopefully adds specific vocabulary).
The main advantage I get from these languages is that I often know a word in one of them but forget it in the other(s) (having known it before). A quick trip to Google Translate resets my memory. Since GT is good but not excellent, checking both "*Language\_I\_know\_the\_word\_in\_1* → *Language\_I\_want\_to\_use\_the\_word\_in*" and "*Language\_I\_know\_the\_word\_in\_2* → *Language\_I\_want\_to\_use\_the\_word\_in*" sometimes helps to identify/recall the best match.
This works best when you **actively** use all of the languages, ideally in different contexts (professional, hobby, family, extended family, books, TV, ...). Your brain is then wired to some words in some languages and you can afford to temporarily forget them in others.
The key word here is **actively** (or proficiently). I also somehow speak two other languages I was forced to learn at school and, well, apart from helping me to buy bread (when I am lucky) it never really made any difference.
Upvotes: 0
|
2022/12/22
| 1,457
| 6,504
|
<issue_start>username_0: Many years ago (when I was young and naive), I accepted a job offer (rejecting two other offers, with some acrimony from one of them) and moved to a new country (Chile) in order to take the job, only to realize after arriving there that I did not have a work contract, just an invitation, and that I needed to enter a 4-6 month process in order to (pay to) validate my PhD diploma with a local university and get official ID papers in the new country, before the administration of the university would even consider hiring me. I was paid in the meantime (although the administration explained to me that my salary would be decided retroactively after being hired, and that the remaining money would be considered an advance on my research expenses, something which was quite uncomfortable to me) and eventually got hired, but the surprise and the whole process were quite uncomfortable. I did not inquire about the legality of the process at the time, assuming that since this is a state university, it had to be an example of probity and follow the law to the letter.
This was not the first time that I had found myself in such a situation. Before this job, I had received (and refused) an offer in another country (France). After announcing to me in July that I had been selected for the job, the chair of the department explained to me that I was expected to move there in August (resigning from my current position on very short notice, breaking my teaching engagements for September), in order to teach in September, even though the contract was only starting in October ("for administrative reasons"). As this was a permanent position, I did not immediately refuse the offer, but asked for the contract, to which the answer was that I would normally get it after I had moved there and started to work. I did not take the job.
I now applied for another position, in another university (still Chile), and was selected. When I asked for the contract (in particular before buying a house close to the new university and moving in), I was told that the administration was trying to make things go faster for me, but that they usually wrote and signed the contract within the first three months of employment, i.e. after I arrived and started to work (while assuring me that I would get paid for my work anyway). When I signaled that I was uncomfortable with such timing (the administration of my current university wished me to resign without delay so that they could hire someone to replace me, which I was reluctant to do without a new contract), I was asked to "trust" the university, and told that in their whole history they had never failed to contract someone after selecting them for a job. I was given a letter promising that I would be hired on a given date, but mentioning neither the salary nor the rank that I was offered informally by email. Two months before my starting date, I still have nothing more than an email, this letter, and some WhatsApp messages to confirm that I was selected for the job. We are setting up a "vacation without salary" with my current university to allow me to start the new job while keeping the possibility to go back if anything goes wrong with the hiring process.
I trust that I will be hired and I have no reason to suspect any "evil" intentions of any person involved in such interactions. But it seems to me that such "arrangements" are in total contradiction with the very spirit and motivation behind the writing and signing of a formal contract, which is to remove the need for any "trust" between the two parties. Instead, I feel that those institutions view contracts as a mere senseless administrative formality imposed on them "from above" which they have to "go around" in order to "get things done". I am (only) half thinking about telling the new university that I will NOT move to the new place until I have a contract (and teach remotely if needed while waiting for the contract). [For practical reasons, I can't rent a place and buy later.]
Hence my question (sorry for the long description of the background): **How (il)legal and common is it for a university to make one move in and work BEFORE signing a contract**?<issue_comment>username_1: There is no general rule and the answer depends on the country.
I worked in **3 different countries** which are not my home country and in **2 of them I received the actual work contract after I started working** there.
But in both cases they gave me a 1-page pre-contract that stated the important parts of the contract (position, salary, ...). Often they cannot give you the real work contract (even if they really want to) because country-specific laws require, for example, your national identification number (or health insurance number or similar) to appear on the contract, which applicants from other countries simply do not have before they have physically moved to the country.
Upvotes: 5 [selected_answer]<issue_comment>username_2: For Germany, this seems very uncommon! It is not permitted to use department-specific software / engage in research activities here in Germany without a signed contract. In rare cases, one has to sign the contract on the very first workday. Nothing is possible before this has been done.
Upvotes: 2 <issue_comment>username_3: If we ignore that this happens at a university: The normal case is obviously that you have a contract with a salary, you work the agreed hours, and you get paid the agreed salary. But working without a signed contract will in many countries create an implicit contract: if you arrive ready to work, and they allow you or even ask you to work, and you then work, then an implicit contract is created by these facts, and you are employed and need to get paid. It’s legal *for you* as far as working etc. is concerned.
Upvotes: 2 <issue_comment>username_4: In the UK, the law says that if you work for an employer, then you have an implied contract with them, whether you have signed a piece of paper or not. At any point an employee has the legal right to demand a written statement of the conditions of employment, as the employer understands them.
It's in everyone's interest that a written contract or document of terms is provided on appointment of an employee, as it saves lots of legal headaches down the line if the employer and employee have different understandings of the terms and a judge has to be brought in to decide who is right.
Upvotes: 1
|
2022/12/22
| 587
| 2,595
|
<issue_start>username_0: There are various assessments and you need to get a grade of over 50% in each. I wasn't able to do that in one of them (I came close), although my overall grade is good. How can I convince my teacher to pardon me?<issue_comment>username_1: Extenuating circumstances exist, and need to be taken into account. However, when they occur you need to tell your instructor as quickly as possible. That way something can be done about it. Dealing with difficult circumstances is about creating an environment where students can show what they can do. If you inform your instructor after a test, then you don't give the instructor the opportunity to do that. All they can do now is lower the standard, which is in nobody's interest.
Upvotes: 3 <issue_comment>username_2: It might not be possible to get a "pass", but there is something you might be able to do in some places and with some professors.
Go have a face-to-face meeting, discussing your progress and what you might be missing. Ask if there is a possibility to do something extra to make up the deficit (and improve your learning). Some people would respond positively to this, rules and fairness permitting.
But the meeting will have its own value in any case, especially if this is your major subject.
Upvotes: 2 <issue_comment>username_3: All graded courses involve drawing a line between pass and fail. There is always some degree of arbitrariness in exactly where to draw this line, but having drawn it, it is generally good educational practice for academics to hold this line to avoid eroding academic standards. If there are genuine extenuating circumstances then you can put in a formal application for "special consideration" and it will be considered. However, the mere fact that you were close to a pass does not itself justify moving your grade (and hence, we will decline to assist you to convince your professor to pass you).
Rather than seeking a change in your marks, it would be a good idea for you to review your work in the course, identify the shortcomings that have led you to a failing grade, and do some additional study to remedy those shortcomings. If you learn the material better it will allow you to pass (or hopefully do even better) the next time you do this course. You should remember that this failing grade serves a valuable diagnostic purpose --- it tells you that you are not yet competent enough in this subject for the requirements of your degree program. Take that diagnosis seriously and read it as an indicator that you need to give more attention to this subject.
Upvotes: 3
|
2022/12/22
| 1,451
| 5,816
|
<issue_start>username_0: First: I want to go to grad school, and I'm an undergraduate student.
Now, I have a really bad GPA. My probable final GPA will be 2.0, and I'm an international student. Still, I have two published papers (more yet to come before I finish undergraduate school), three scientific projects (two of them with government funding), participation in events and many minor things related to what I want to do in my research area.
I will need to enroll in another undergrad school to overcome my bad GPA. Therefore, I designed the following plan:
>
> 1. I enroll in an MSc program here in my country, and meanwhile I start a second undergrad school to boost my GPA.
> 2. I use the CV of this online undergrad school together with my MSc CV to pursue the PhD abroad.
> 3. In the end, I will have many things AND a nice GPA.
>
>
>
Therefore, when I send the documents for the PhD application, I will send a CV showing the nice undergrad GPA from the second degree, the things that I did during the first undergraduate degree (BUT NOT THAT CV per se) and a nice master's degree CV. In practice, I will "mask" my grades from the first undergraduate school.
This plan sounds nice, but I would like to ask:
**Will it sound strange to the PhD admissions committee?**<issue_comment>username_1: What you are trying to do (erasing academic history) is a violation of academic integrity that may result in expulsion once it is disclosed.
To answer the question "Can they check it?": it is usually not possible to check the full education history of a foreign citizen. It is already a hard thing to do at the embassy level, and even harder at the university level. You may hide something, and they will never know.
However, erasing a substantial part of your education history will make your application look really suspicious, especially when the gap is 4 years.
>
> What was the institution you were affiliated with when you were working on the scientific projects? (not affiliated & no previous qualification?)
>
>
>
>
> Why did you enroll in an (online?) undergraduate program and a graduate one at the same time without previous education? It is simply not possible for many reasons (not necessarily *time*)
>
>
>
>
> What did you do in that 4-year period?
>
>
>
People are not stupid. They will know immediately that something fishy is going on here.
They may also contact the institution where you obtained your master's degree to inquire about the qualification you used in order to apply for the program.
Upvotes: 2 <issue_comment>username_2: Like the [advice of username_1 피카츄](https://academia.stackexchange.com/a/191869/75368) and Sursula, I also suggest that you can't ethically hide your early performance and shouldn't try. Having a later degree revoked for fraud is one consequence.
But you ask how it can be overcome, and there is a way. A poor GPA can be made irrelevant, even when known, by showing through things like publications and collaborations that you have overcome whatever deficit you might have had. You actually already show some of this activity and success, though it isn't clear from what you write that it is in the same field that you want to pursue.
I'll admit the possibility that a student can get bad grades and still learn. I think it is pretty rare and it is also pretty hard to convince people of it, but you might be able to bring it off. Doing badly in class if there are language issues can certainly happen, as can some learning disabilities that can be documented.
I've read reports, not all consistent, that Einstein wasn't a stellar student. But that might have been a result of the stifling education system he was stuck in.
Don't think in terms of two CVs. Think of a single one that shows all, including the warts as needed. But also show that you have the required knowledge and (importantly) insight to embark on research.
Upvotes: 1 <issue_comment>username_3: Agree with others – do NOT hide your previous degree. Yes, graduate schools require you to submit all transcripts, and falsifying your background is the surest way to end your academic career.
Instead, try to *explain the negatives honestly* and *accentuate the positives*. Specifically:
* Were there extenuating circumstances for your low GPA? Were you dealing with personal problems, e.g., health, financial? Did you improve over time especially by the last year? That helps to show your future grades are likely to be better.
* Were your grades better in core courses? That helps to show you have the potential to succeed in grad school.
* Talk about your research projects. How did you get the funding? Was it a competitive process? What were your contributions to the project? Don't exaggerate because every serious project involves a team, but do describe your specific role. That helps to show your research potential and ability to lead or work in a team.
* Most important: Papers are the best way to prove your research productivity. Conference proceedings are good too, as few undergrads have journal papers. Provide citations showing the full author list. Use arXiv or ResearchGate etc. to share a copy if it is not yet published or not easily accessed publicly. Note that a vague mention of something "submitted" or "in preparation" is meaningless.
Finally, consider your career options. You mention the MSc as a stepping stone, so I'm guessing you aspire to the PhD. Are you quite sure that is the right path for you? Yes, some people have succeeded in spite of bad grades, but it's a very uphill battle. Perhaps the MSc is all you need to get a good job? Talk to your local academic advisors to make sure you fully understand the reasons behind getting a PhD, and whether that is the right path for you. Good luck.
Upvotes: 1
|
2022/12/23
| 1,311
| 5,675
|
<issue_start>username_0: I got an offer for an assistant professor position in the USA starting on September 1st. Classes start on August 21st. So I am expected to start teaching at the university before I'm actually employed. Up to this point, OK, I don't mind.
However, as I am a foreigner, I can enter the USA with my H1B visa (work visa sponsored by the university) only 10 days before the contract starts. 10 days before will be August 22 (remember: classes start on August 21!). I would prefer to enter the USA before August 22, to organize my life before I start the full-time position (rent a house, buy a car, set up internet and electricity, do all the moving, etc).
Is it possible to ask the university to sign my contract starting 1 or 2 months before September 1st so I can enter the US before August 22? I wouldn't ask for more salary for that first year, of course. It would only be an administrative solution allowing me to get a visa earlier.
Is it possible?<issue_comment>username_1: Of course you can and should ask the university to help you deal with this problem. They have done all the work so far in the hiring process, arranging the visa and writing the contract. It's in their interest as well as yours to straighten this out.
Let us know what happens.
Upvotes: 5 <issue_comment>username_2: You surely can ask, but I suggest that you do not ask for an earlier contract date, but for a solution for the real problems that you have.
That is, do not ask "can you just put a different date on my contract". This smells bad to me, especially if it is done explicitly to mislead border control. (Although if the university indeed wants you to start teaching before the contract starts, this smells equally bad to me.)
At the same time, the date on the contract is not your real problem. Your real problem is organizing your life. So, just explicitly tell the university that you need time "to organize your life", mention the visa problem, and ask for suggestions. You may find that the university has standard policies and processes regarding this.
Also, are you sure that the university actually expects you to start work before the official contract start date?
Upvotes: 4 <issue_comment>username_3: You and your new department are now partners: They want you to succeed in your job, and they need you to do what is necessary for their teaching mission. As a consequence, nobody is well served if the earliest you can show up is a couple of days after classes start.
So write your future department head an email explaining the situation: That visa rules say that you can only get there by a date after classes start, and ask whether the start date of your contract can be changed to make it possible for you to be there when you need to be there. That is clearly also in the department's interest, so should not present any difficulties.
(There is an ethical issue about an employer expecting you to teach starting ten days before your contract starts: They expect you to work for free for a third of a month. I consider this unethical. Whether you want to press that point is of course an entirely different question, but you don't have to: You have good other reasons why you can't start teaching when they want you to teach.)
Upvotes: 2 <issue_comment>username_4: Not knowing which university you will be employed at, I looked at several university websites regarding the protocol for new employees requiring an H1B visa, including Boston University, UC Berkeley, and the University of Missouri, among others.
What I have learned is that they have an International Student and Scholar Office or something similar that petitions US Citizenship and Immigration Services (USCIS) for the H1B visa on your behalf. They supply information regarding your background, your qualifications for the proffered position, and the position begin and end dates. They subsequently receive the approval notice, Form I-797, and send you copies of the petition and the approval Form I-797, as well as any information you might need for navigating the process of obtaining your visa at the US Consulate or Embassy in your country, and information about traveling to the US.
Have you received these forms? If not, I would immediately get in touch with your contact person at the University and inquire when you will be receiving the forms, and if there's any other information they may be able to provide to streamline the process. They have advisors dedicated to international students and scholars and you may already have one assigned. I found departments dedicated to international students and scholars on the university websites. If you don't believe you have been assigned an advisor, you should check the school website and find that department.
From what I have read, the process to obtain the H1B Visa can take several months when everything is going smoothly; it can take much longer should an issue arise. The sooner you can get it done, the better.
When reading the information provided, it does sound like there may be employment begin and end dates as well as contract begin and end dates provided on Form I-797. The employment dates may differ from the contract dates. I can only guess: maybe the contract runs by fiscal year, or maybe there's an onboarding-type class you have to participate in before your contract becomes effective, and you have a different employment designation until then. It doesn't sound like a critical issue, just something needing clarification.
I hope this helps. Hopefully, if you have not yet received it, you will receive the petition, the approval, and an employee information packet that explains everything very soon.
Good luck!
Upvotes: 1
|
2022/12/23
| 395
| 1,581
|
<issue_start>username_0: I cited a paragraph of an article which classified some models into different categories. For each class, it lists related work from the literature. Is it plagiarism if some of its citations match mine?
Structure of the cited paragraph:
* Class 1 / related work
* Class 2 / related work
* Class 3 / related work
To summarize:
I indicated that the structure was obtained from the article. I did not cite the fact that some of the related work for each class was contained in this article.
For example, I cited a paper belonging to the first class, which was the first in this category.
Is this considered plagiarism?<issue_comment>username_1: If it is someone else's work, cite it, or it is plagiarism.
If it is your work, let it be your work.
A lazy way would be to write "see article xyz and references therein".
From what you describe, it seems to me that you need to cite every work for each class, but maybe you can simply state that the classification of the models was presented by xyz, and that you are referring to models of class *n* only.
Upvotes: -1 [selected_answer]<issue_comment>username_2: Note that plagiarism is about misattribution of the *ideas* of others, usually to oneself. It isn't about words or structure, necessarily.
Nothing in the question seems to imply misattribution of any idea.
In particular, with regard to citations, if you are writing some paper that extends another then citing some of the same work as the first seems perfectly natural. Likewise for a review article. In fact, it might be plagiarism *not* to cite.
Upvotes: 2
|
2022/12/24
| 579
| 2,630
|
<issue_start>username_0: I'm a recent social sciences master's graduate based in the UK, and I'm interested in making PhD applications next year. I would like to take the year I have available to gain some kind of experience, ideally research experience to strengthen my applications. Do you have any advice on some steps I can take for this? I have been thinking about research assistant positions but there aren't many of them available, especially in the field I am interested in. Any advice would be greatly appreciated.<issue_comment>username_1: One possible option is to try to land a job in, perhaps, the library at a nearby university. My daughter did something like that. It not only gives you access to the library itself, but also to the faculty, if you can work out the hours. Some other position at a university might also be possible.
As you say, you probably can't land a research assistantship, but it is still possible to collaborate with people as an independent researcher and get a start on research. You need access and some time to invest. It is also good to have some ideas for possible research that a faculty member might be interested in. You need to avoid getting exploited, but a cooperative relationship isn't out of the question.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Think broadly!
To maximize your potential for finding a position, you can search for any position that has a research component vaguely related to your potential PhD topic. For instance, if your topic is women's labor force participation during recession, you should be very broad and look at positions related to women, to labor force participation, or to recessions. The position could be in a university or more broadly in a nonprofit or similar NGO.
If you can’t find work with a specific research component, you can also look at positions that would give you experience with the population you are interested in.
Upvotes: 2 <issue_comment>username_3: PhD admissions tutor here! First, you are doing exactly the right thing in looking for more experience. If you are applying for a funded studentship they are very competitive and you need to stand out from the crowd. If you can get a research assistant post in a vaguely relevant area that's great. But I also really like applicants who have a bit of non university experience - consider NGOs, market research companies, local government, civil service, industry. These will give you a different perspective on research and some new skills and contacts. Civil service in particular is often an excellent option if you can get in. Good luck!
Upvotes: 1
|
2022/12/24
| 1,349
| 5,252
|
<issue_start>username_0: *My field is Statistics & Machine Learning*
I have come to know that job prospects in academia are really (really) bad. Of course, it is discipline-specific, but it did discourage me from pursuing a PhD, not necessarily because of imposter syndrome, but perhaps because of my initial expectation that I would stay in academia after getting a PhD.
Eventually, I decided that I would still pursue a PhD, as it is still useful for me, but after getting the degree, I would work in industry instead of academia. At least in my field, having a PhD will open many doors (e.g. data scientist positions). I still enjoy doing research (developing models and algorithms) and teaching mathematics. It is just that I think it's not a good idea for me to stay in academia after graduation. Therefore, I will also prepare myself for the industry job market.
Is it a problem if I start my PhD with no intention of being an academic after graduation?<issue_comment>username_1: No, there is no issue with pursuing a doctorate intending to work in industry. The downside, however, is that most jobs are in academia.
I know quite a few researchers at top international companies who have followed that path. They are quite well known and participate in professional/academic conferences and publish. But for every one of them there are many others in academia, I suspect.
But, it is good to have a career path and to work to follow it. A doctorate gives some skill (we hope) and practice in research, but doesn't imply that the research be done in a university setting.
Beware, however, that a lot of industrial research (not all) is product focused, not knowledge focused. That might be sub-optimal if you are forced into such a career - or not. Your choice.
Upvotes: 3 <issue_comment>username_2: >
> Is it a problem if I start my PhD with no intention of being an academic after graduation?
>
>
>
No. It is a mathematical fact that most PhD graduates will not obtain a permanent research + teaching position at a university. Being clear-eyed about this is not a problem; just the opposite.
>
> At least, in my field, having a PhD will open many doors (e.g. data scientists).
>
>
>
This is absolutely true, but it's also true (at least in this field, outside of universities) that a master's plus a few years' experience is equivalent to a PhD for all practical purposes. Certainly nothing wrong with doing a PhD, but also nothing wrong with "mastering out" when the time comes, if you're not excited about the prospect of finishing. There are advantages in both directions.
>
> Therefore, I will also prepare myself for industry job market.
>
>
>
Yes, this is key. Invest in learning skills that are useful, even if not directly required for your PhD (e.g., try to learn about good programming practices, rather than terrible research code). Likewise, you can try to avoid projects that are not relevant to industry -- but it may be hard for you to judge which these are; most people underestimate the extent to which "theoretical" work can have real-world applications.
Upvotes: 7 [selected_answer]<issue_comment>username_3: **Probably not ...**
Since, after all, you are already accepted into the program, it's virtually guaranteed that the list of things you need to do to graduate with a PhD does not include "find an academic job somewhere". In other words, you should be able to graduate with a PhD, as long as you are bright/motivated enough to meet the requirements.
... but your colleagues might [look at you funny](https://academia.stackexchange.com/questions/108543/what-makes-academic-failure-different-from-failure-in-another-career) if you tell them you have no intention of staying in academia. Hopefully your supervisor is not one of those people.
Upvotes: 2 <issue_comment>username_4: >
> Is it a problem if I start my PhD with no intention of being an academic after graduation?
>
>
>
I decided to go into industry during my first year of PhD studies. I started to work in a large company in parallel (as a consultant/trainer) in a tangentially related field (that I knew well).
The PhD studies were wonderful. I also knew that I would switch in a few years, and that allowed me to be relaxed about politics and the feudal hierarchy. It really helped.
I actually did the switch in my last year (or 1.5 years before the defense), after discussing it with my thesis director who was supportive.
One thing I would very strongly advise against is telling everyone around you that you will be leaving just after the thesis. I am not saying you should lie, but rather that you should not discourage people from being around you - you may end up cooperating with some of them later on, as I did.
So to answer your question: no, it is not a problem.
Upvotes: 2 <issue_comment>username_5: Not a problem. Just ask yourself why you want to do a PhD program. Personally, I found a lot of value in having some years of time to just pursue my own interests, despite never really seeing a continued academic path as a reasonable option. Although if you are in it for that, make sure the position you are applying for actually offers anything like that. I know a lot of people whose PhD essentially amounted to a grind along a predefined path.
Upvotes: 2
|
2022/12/24
| 1,771
| 7,205
|
<issue_start>username_0: I keep seeing my graphs all over presentations and proposals that are presented to companies for funding. There is no part that credits my name, and it looks like it is my professor who is doing this work. They will also send my ideas and work progress to another student in the group who will then make the idea their own and then get credit for it. I feel completely used and I've gotten to a point where I'm not going to share any of my progress with my supervisor, and if I do (out of necessity) then I'm going to put watermarks on it.<issue_comment>username_1: There are probably two different aspects to the misconduct here. One is not asking you for permission to use your creative work, as you have copyright to it unless you have already given it up.
The second aspect is plagiarism, which certainly applies if there are any "ideas" and creativity in the work. It is less clear-cut if the graphics are standard things like plots. But animations probably have creative elements, that is, ideas.
Note that plagiarism covers ideas and their misappropriation. Copyright covers expression. Both seem to have been violated, given your description.
The professor may also be leading other students into a plagiarism trap if they appropriate your ideas.
If you have an escape available, I suggest you consider it. You at least need an explanation of why the supervisor thinks any of what you describe is appropriate in any way.
Upvotes: 1 <issue_comment>username_2: There is something totally wrong in the communication between your professor and you. Normally it should go something like this:
>
> Professor: Hey, you did some interesting work on XYZ. Next week I will
> give a presentation / send a proposal / etc. where your work would be a
> good fit. Is it okay if I put it in?
>
>
> Student: Yes, sure! (super happy that the work gets used)
>
>
> Professor: Great, thanks! I will make sure that you also get the credit that you deserve for it!
>
>
>
I would assume that the professor has at least some right to use your results anyway, as he most likely had the original idea and got the funding, but he should definitely have asked your permission!
Upvotes: 3 <issue_comment>username_3: >
> Is it plagiarism if my PhD Supervisor uses my graphs and animations (without asking me) in funding proposals and presentations to companies?
>
>
>
Probably not. It sounds like this was joint work. Even if you feel that you did all the work and your advisor contributed nothing, it is still probably joint work if you did the work while (formally) under your advisor's supervision.
Moreover, it doesn't sound like these are super-formal documents like peer-reviewed publications or important conference talks where you need to be careful about listing all authors and citing all prior work, including your own. Rather, these are designed to showcase the lab's expertise and capabilities. It may "go without saying" that the lab has other staff and collaborators and some of the results were produced by others. Perhaps it would be nice if the professor used your name, but this is a judgment call, and I think it's unlikely you could call this plagiarism.
>
> They will also send my ideas and work progress to another student in the group who will then make the idea their own and then get credit for it.
>
>
>
It's hard to judge this case just from a sentence, but I find this to be much more worrying than the above. But we should be clear on what we mean by "make the idea their own." If they are building on your results and going in a different direction, then that sounds fine; different members of the group working together on a common area seems very appropriate. Maybe you'd prefer to work alone and not involve this other student, but this may be your advisor's call. On the other hand, if they are publishing your work as their own, then that is clearly plagiarism. But this case is all about the details.
>
> I've gotten to a point where I'm not going to share any of my progress with my supervisor
>
>
>
This is likely not an option; you cannot unilaterally rewrite the terms of your collaboration. I would recommend humility: ask your advisor for a sit-down meeting, cordially raise your concerns, and let them explain their reasoning. Be open to the possibility that they acted within their rights (e.g., in showing your work to sponsors), and/or they made some small mistakes that can be corrected (e.g., passing your work to other students). If you cannot agree, you might reach out to another trusted professor, or to the ombudsman -- but this is a semi-nuclear option that may result in you having to find a new advisor. If you really trust your advisor so little, then finding a new advisor may be for the best, but unfortunately, this is likely to disrupt your life much more than your advisor's.
Upvotes: 3 <issue_comment>username_4: I disagree with some of the advice given here. I like to use [Teddi Fishman's definition of plagiarism](https://www.bmartin.cc/pubs/09-4apcei/4apcei-Fishman.pdf):
>
> „Plagiarism occurs when someone
>
>
> 1. uses words, ideas, or work products
> 2. attributable to another identifiable person or source
> 3. without attributing the work to the source from which it was obtained
> 4. in a situation in which there is a legitimate expectation of original authorship
> 5. in order to obtain some benefit, credit, or gain which need not be
> monetary“
>
>
>
It is obvious from the description given that the professor is misusing his or her position of power over the PhD student. I find it entirely irrelevant that the professor obtained funding or had the idea. Indeed, the [ICMJE](https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html) makes it clear what is considered authorship. Professors should not assume that they have the rights to everything produced in their labs without having to ask for permission and acknowledge the source. We are no longer in the Middle Ages where everything the serf does is considered to belong to the master.
The problem is: what to do. I agree that contacting the RIO (Research Integrity Officer) of the school is a good step to take. Just watermarking the images does not help much, but as others have said: there is a communication breakdown here that must be addressed. It needs to be addressed professionally (i.e. without drama), something along the lines of "I've noticed that you are using my #whatevers# in your presentations. Is there any particular reason why you are not mentioning my name?"
As for giving the material to others for them to build on: This is another problem that is unfortunately far too common. There needs to be a culture of communicating, of group meetings where the professor says: "I've seen the work of A on X, it looks so promising. Maybe A and B can get together and talk about it?" And if the professor sees B using the material without attribution, B gets pulled aside and told why we give credit to others.
These are slippery slopes to serious misconduct, and it is the responsibility of the professor to get things in order. That's why a RIO may be able to help.
Good luck!
Upvotes: 1
|
2022/12/26
| 602
| 2,515
|
<issue_start>username_0: I’m not from the US and I'm applying to US grad schools for a PhD in Math, and I'd like to share my academic website, which contains details about six of my favourite courses from my master's and around 30 or so PDFs of solutions to challenging problems, assignments, tests, etc. that I did during my program. This could be seen as evidence that I made efforts to solve problems beyond the syllabus and that I'm genuinely interested in researching those areas.
I usually share a link in the contact information section of my CV but in case a CV is not requested, what's the best place to share this link? I could add a line in my SOP and include a link in the footer but that's usually discouraged.
Will it also be considered okay to include information about my extracurricular activities and some photos? What else could I include on this website to give further evidence of my preparedness for a PhD program? (I do not have any publications or a master's thesis yet)<issue_comment>username_1: I have never seen a PhD programme to which you can apply without a CV. Also, I am not sure whether your CV is the best place to showcase your efforts. The committee will definitely require a research proposal from you. This is where you state your goals and ambitions.
Explaining your research goals with evidence is never a bad idea, and in fact it would work well to highlight your website and encourage the committee members to go and check it for the coursework you've completed, etc.
Upvotes: 2 <issue_comment>username_2: The SoP, not the CV, is the place to state your goals, both for study and thereafter. So, focus there on the area(s) that you want to study. Use past accomplishments only in brief statements (phrases) that support those goals.
I'm also surprised that you aren't asked for a CV, though it might be permitted to supply one anyway. The reason for this is that most doctoral programs in the US don't require anything beyond the bachelor's, so the opportunity to develop a meaningful CV is minimal for most students. Other places (Germany...) where a master's is usually required probably would want a CV or similar.
And a link to an academic website might not even be followed in a US application. It would be a bit strange to refer to it from the SoP, which should be future-focused, not detailing past accomplishments.
See: [How does the admissions process work for Ph.D. programs in Country X?](https://academia.stackexchange.com/q/176908/75368)
Upvotes: 2
|
2022/12/26
| 273
| 1,078
|
<issue_start>username_0: So, I noticed a small typo in my resume two weeks after submitting it. In my professional summary, I accidentally put "counselling" instead of "counseling". Are small typos overlooked or detrimental? I didn't notice anything else but I also don't want to keep looking and then cause my anxiety to increase more.
I ended up finding two typos. The end bullet said "task included" instead of "tasks included". Ugh. :(<issue_comment>username_1: No, it won't be a problem and might not even be noticed. Relax.
However, applying to only one school is probably suboptimal. Competition can be pretty fierce and most places get high-quality applicants. Cast a broader net if you can.
And get a spell checker for the long term. The one here fixes lots of my typos (blush).
Upvotes: 3 <issue_comment>username_2: This is absolutely not a problem! I screen loads of applications and can assure you that a) no one will notice and b) if they do they won't care. If your application is littered with errors that's a different matter, but a typo is fine.
Upvotes: 0
|
2022/12/26
| 1,387
| 4,639
|
<issue_start>username_0: On the list of [top 250 largest patent holders worldwide](https://www.ificlaims.com/rankings-global-assets-2021.htm) (2021), I see that Chinese universities hold many more patents than other universities, even normalizing by population size or student population size. Why? I am especially interested in the difference between China and the USA.
Universities ranked by number of active patent families\* from the link above:
| Rank | Name | # active patent families | Country |
| --- | --- | --- | --- |
| 2 | Chinese Academy of Sciences CAS | 78,415 | China |
| 16 | Tsinghua University | 31,368 | China |
| 34 | Zhejiang University ZJU | 21,805 | China |
| 76 | Zhejiang University of Technology ZJUT | 11,053 | China |
| 109 | Dalian University of Technology | 7,932 | China |
| 112 | National Research Council of Science and Technology | 7,803 | South Korea |
| 119 | Nanjing University of Aeronautics and Astronautics | 7,373 | China |
| 124 | China University of Mining and Technology CUMT | 7,014 | China |
| 151 | University of California | 5,603 | USA |
| 157 | Centre National de la Recherche Scientifique CNRS | 5,370 | France |
| 218 | Capital Medical University | 3,739 | China |
---
\*An active patent family [is](https://www.ificlaims.com/rankings-help.htm):
>
> a collection of global patent filings related to a single invention. For example, if a company invents a new technology and files related patent applications in 10 countries, that represents one patent family. Further we only count patent families that are active – which contain at least one current, granted patent. This is a patent that can be enforced and that has not expired.
>
>
><issue_comment>username_1: I suspect there are two main reasons.
1. There are substantial subsidies for patent applications in China
2. Chinese patents are generally of lower quality; thus it takes less work to file them, and there is a lower bar on what to patent.
#### Patent Subsidies
It costs a lot of money to file a patent application in the US. Meanwhile, in China, [currently](http://english.www.gov.cn/statecouncil/ministries/202201/27/content_WS61f29de1c6d09c94e48a4616.html), there are monetary subsidies from various levels of government for filing patents.
[Lei and colleagues](https://funginstitute.berkeley.edu/wp-content/uploads/2013/12/patent_subsidy_Zhen.pdf) wrote about this issue in 2013:
>
> Other observers, however, believe that the patent
> boom is largely due to various patent subsidy policies implemented by local governments to meet and/or exceed the patenting targets specified by the government.³
>
>
>
They go on to note:
>
> it’s likely that the increase in the number of applications is due to
> patentees broke up their patents to get more applications (and thus more subsidies).
>
>
>
#### Patent Quality
The other issue is patent quality. In my personal experience with university patent offices in the US, the professionals who work there are very interested in whether your invention is of high enough quality and commercial value before filing for a patent--after all, it costs a lot of money.
Because of the subsidies, universities in China may be less interested in patent quality. The bar is lower, allowing more patent applications.
[Hefa and Zhenxing wrote about this in 2014](https://www.law.berkeley.edu/files/Song_Hefa_IPSC_paper_2014.pdf):
>
> Based on the analysis above, it is discovered that of all the four aspects of patent quality, China falls behind in quality for invention, document quality, and quality for commercialization... The lagging behind quality for commercialization shows the reality that Chinese patents are less commercially valuable. The average royalty and license fee is
> significantly lower compared with developed countries.
>
>
>
In conclusion, there are subsidies from the Chinese government that incentivize more patent applications. Further, the quality of the resulting patents is lower.
[The Chinese government seems to agree](http://english.www.gov.cn/statecouncil/ministries/202201/27/content_WS61f29de1c6d09c94e48a4616.html):
>
> China plans to stop all patent subsidies by 2025 to help shift its intellectual property (IP) focus from quantity to quality.
>
>
>
Upvotes: 5 <issue_comment>username_2: My two cents:
* It is part of their performance metric. In other words, having patents counts toward their promotion or the continuation of their employment.
* Many academics own side businesses; there is a huge incentive to do so given their low salary. Hence, many seek to commercialize their research outputs.
Upvotes: 2
|
2022/12/27
| 3,819
| 16,745
|
<issue_start>username_0: Unlike the usual supervision style in my department (CS), I put an enormous amount of effort into supervising my master's students. I meet with each of them twice a week and spend around two hours each week explaining what to do and how to do it.
From the first step, I propose a topic with a potential methodology, ask the student to follow it, and give him or her precise instructions on how to proceed. After they successfully complete their theses, I usually push the students to write papers, but this has never worked out, as the students find it pointless to write papers if they are moving to industry. Instead, I write papers out of these theses and submit them to journals, including the respective student as a co-author. Since I am the one writing the paper, I put myself as the first author and primary contributor. Some journals make it possible to specify the contribution of every author, but most do not.
I am wondering whether claiming that I am the principal contributor is fair/ethical, given that the student is implementing my ideas and I am the one who writes the papers. Being the principal contributor is important for some grant applications.
**EDIT**
After reading some comments, I think I did not make myself clear. My question is not about whether I should be the first author; as the one putting in the effort to write the paper, that seems appropriate. *In summary, there is no disagreement between me and the students. They are actually happy to get a paper without putting in additional effort.* My question is rather about whether I should work on the paper and submit it at all (if not, the paper will never see the light of day). Note that this is done only after the student declines to write the paper and gives his/her consent that I take the lead (e.g. via emails exchanged after the thesis is completed, so that I do not affect his/her decision).<issue_comment>username_1: To be honest, I would always let the student be listed as primary/first author, even in cases where it is pretty clear that they weren't. That is a matter of opinion, I guess, but I viewed it (I'm retired now) as an important part of my job to boost student careers.
The precise instructions you give are exactly your main job. Letting them off the hook on writing for publication was/is probably a mistake for any student wanting to go further in their education, though not all master's students do.
In fact, I'd go further and not be listed as author at all except in clear collaborative work with some equality of participation. I was listed as co-author on none of my student theses or subsequent publications either at masters or doctoral level. Let the glory be theirs. An acknowledgement is enough, sometimes detailed.
I don't know how common my views are, of course. I will note that it is pretty common for doctoral advisors to suggest/provide problems and advice and feedback along the way. I suspect that most doctoral advisors could do the work themselves if they had the time.
As to the specific ethical question, I think it is marginal at best.
Upvotes: 5 <issue_comment>username_2: My remarks here merely add to the similar remarks from @username_1.
I too would prefer to have the student as the primary author because I consider my promotion of my students to be one of the most important things that I can do for them, whether they ultimately end up in industry, academia or somewhere else.
I also recognize that the sense of *urgency* about publication is frequently driven more by my priorities than by theirs. It isn't that students don't care about publishing, but they often have other things to consider: completing their applications for multiple graduate schools, finding a job to pay off their student debt, or finding new accommodation, for example. Publication might have a more immediate payoff for me than for the student; I can chalk up publications against my employment KPIs!
So yes, I'd write up the paper, put the student as first author, and feel good about the whole deal.
Upvotes: 4 <issue_comment>username_3: This *would* be completely ethical… if only you discussed the authorship order with your students and made sure your suggested order is acceptable to them before submitting the papers. Explain to them that you are planning to adapt parts of their thesis research into a paper, that this requires a lot of work, and that therefore you think whoever does that work should be listed as the primary author. Give them a chance to do the work themselves and get that primary authorship credit (as you seem to be doing already). If they are uninterested, get them to confirm that they agree you will be the primary author. (And if they disagree to that, then have a discussion about it and make sure you and the student reach some agreement on this issue before proceeding with any further work on the project.)
In other words, the ethical issue here isn’t your thinking you should be the primary author, but the fact that you think it’s okay to submit a paper without having an explicit agreement from all coauthors about the type of authorship credit they would be getting. This type of agreement is necessary with any collaborator regardless of whether they are your students.
**Edit following OP’s clarification of the question:** you say that you do actually discuss with your students the plan of you turning their thesis results into a publication, and get their consent to proceed (presumably also with the author order agreed to by everyone, although you didn’t state that explicitly). In that case, as I said in my original answer, there is no ethical issue — your contribution seems within the range that warrants first authorship, and your coauthors agreeing to you being first author shows they clearly believe so as well.
Upvotes: 7 [selected_answer]<issue_comment>username_4: In some research areas in some journals there is no primary author. Authors go in alphabetical order by "family" name. I prefer that, but it's just my bias.
About these rewrites: it seems the student has done ample work to warrant their name on the paper, so they should be quite closely involved in the rewrites. Whether the changes are editorial or technical, their name is going on the paper, so they should have a say in what goes into it. Rewriting without involving them, even if they wind up approving of the paper, is at least inconsiderate.
Also, rewrites were for me a way to learn a lot about what makes a well written paper. From mundane details like acceptable style for a journal, to fundamental technical details, I learned a lot having my prof drag me through the process.
I'd be surprised if a prof did much of the mundane work themselves. Markup on a draft, sure. And meetings to discuss things, especially if there are technical issues. But I'd really expect the student to do the "pounding a keyboard" type of work - again, so they can get the learning benefit, and also so that the prof can spend more time on other work.
Upvotes: 3 <issue_comment>username_5: Your department seems dysfunctional; this is not uncommon. You should consider the broader scope of the issue.
That is, students do not really care about research, but you do. Their goal is to get their diploma as efficiently as possible (which means not putting any extra effort in). Given the university is happy to give those away without publications - which is also perfectly sensible - there is absolutely no incentive for them to do anything about these papers *after* they have completed their thesis. Further, if the thesis committee sets the bar far below that of a decent publication, there is practically nothing you can do with respect to students actually writing papers. Ergo, as username_3 points out, you should just sit down with your students and agree on authorship in advance.
Also, if the work your students do is essentially limited to working in the lab, running experiments, getting familiar with frameworks and tools of the trade, you may be well-entitled to the first authorship by any reasonable standard regardless. Still, you have to develop a shared understanding of contributions and publication strategy, because it is, essentially, just your whim right now; students do not have any skin in this game.
This situation - again, not uncommon, especially in weaker departments/universities - has other implications for your career. If, higher up the management chain, there is motivation to rework the existing system, make students more competitive (including academically), and build a scientific school, you could leverage it to slowly (!) shift the evaluation standards and normalize paper-writing-related assignments. If not, your efforts there would be futile. And if so, consider how much time/effort you should keep putting into supervision and what to focus on: your students might need a less research-focused subset of the skills that you are giving them, and you are likely operating sub-optimally when it comes to project scale and continuity.
Bear in mind that the categorical imperative or, more practically, "fake it until you make it", is often not the optimal approach.
*In good departments, many students write papers*
*In good departments, some students continue working in academia*
*In good departments, professors are productive*
All of the above is not a function of a single person's work. There is an entire system supporting it, and attempting to change it alone, without substantial administrative power, is doomed to fail. I know it all too well; you can change things when you are a dean or a research rock star, but at the earlier stages of your career, pick your battles wisely. Limit the scope of your ambition to what you can realistically achieve at the time. Maybe your university/department is the wrong place to push MSc students towards novel scientific contributions altogether, and its social role is currently limited to pumping out new hires for the ever-hungry industry. I have taught in one such place, and it was miserable.
Upvotes: 2 <issue_comment>username_6: I can just speak for my case. While writing my Ph.D. thesis, I supported bachelor's and master's students as well. As in your case, I put a lot of time, including my free time, into supporting students: I planned their experiments with them, helped them evaluate their data, and so on.
Each of them enjoyed doing their thesis with me. In the end, I used their experimental data to write papers. The thing is, none of their theses has enough data for a single paper, nor the quality of references and so on to be published alone. So most of the time I combined two bachelor's theses, or a bachelor's and a master's thesis, to write a research paper. And because I combined the experimental data and evaluated it from a new perspective, everything I wrote was new, except for some information from the students - and that information was mostly known before and published in papers. So I am always the first author and each of the students is a co-author,
no matter how small their contribution was. In my institute, this is also the common way: first authorship goes to whoever did the writing of the paper, which in this case is me. Also, most of the time the figures in a bachelor's or master's thesis need to be harmonized, so I remake all the figures. It also usually takes me 6-8 weeks to write the manuscript.
Upvotes: 3 <issue_comment>username_7: The key here is to be explicit with your students when they start. In my first meeting with students I now routinely explain that 1) I want them to publish, 2) if we think we can publish, then either they, or I, or one of my postdocs will need to turn the thesis into a paper, and 3) whoever does that will usually be first author. If we agree on that, I then put it in an email to them, and it saves hassle in the long run.
Upvotes: 4 <issue_comment>username_8: Often the principal idea comes from the supervisor, and the steps to achieve the goal are small and also guided by the supervisor. If the student has no big idea of their own, I think it is okay for you to be the first author.
To me, it is very important to talk to the student. Offer them the chance to write the paper themselves as the first author. If they are not interested, offer to write the paper yourself and list them as a co-author. If they don't agree, you are not allowed to list them as a co-author!
Upvotes: 2 <issue_comment>username_9: To answer the question, I think it is ethical in the individual cases described.
But there is a growing cumulative problem when you find this happening again and again.
*Unlike the supervision style in my department (CS), I put an enormous effort into supervising my master's students. I meet with each of them twice a week and spend around two hours each week explaining what to do and how to do it.
From the first step, I propose a topic with a potential methodology and ask the student to follow it and give him or her precise instructions on how to proceed.*
**You are doing far too much work for your students.**
While I appreciate the importance of research management and publications for your own academic career, they are graduate students now and they really have to be self-motivated enough to make the most of their master's program. You should **discuss** methodologies with them - **not dictate** a particular methodology. Neither should you be providing them with twice-weekly spoon-feeding sessions: let them come to you (or you check up on them) as and when they need to, with, say, one review session roughly every two weeks.
Is it not astonishing that, if you can extract a paper from the work of an MS student, that student cannot be motivated enough to put it all together themselves? Industrial organizations, government labs and all sorts of other employers will always be more impressed by a CV with publications (even one) on it than by one without any.
Naturally, having put your own name to so many papers involving your MS students' work, you may fear being seen by peers as a lazy researcher - or even a scavenger - rather than as someone implementing their own ideas. And given your time expenditure, this must feel so unfair.
I feel you need a whole new approach to your MS supervision so that you don't just work hard at it - but **supervise smart**.
* Minimize "straight through" MS candidates, i.e. students who had been your undergrads the previous year. Students in this category usually see PG work as an extended version of a final year undergraduate study, which it is not. Select mostly undergrads from other universities, students with ~ 2 years work experience from wherever and only highly-motivated straight-through local students with no ulterior motives, e.g. girl/boyfriend finishing their course at your university.
* Select only those candidates who *demonstrate understanding and acceptance of the importance of publications* to their future career.
* See your HoD or research dean about organizing classes in common postgraduate skills like literature searches, note filing systems, experiment planning, statistical analysis, presentation practice, paper writing, etc. All students must take this and show evidence in their engagements with you of having done so. Don't you go around hitching up their academic trousers like a departmental Mommy.
* Remember that only MS work done by well-motivated students is going to be of benefit to your own career: don't let students see from your extravagant commitment that you in a sense "need them", however sloppy their attitude may be and however easy it is for them to get well-paid jobs in industry. Let them all know that you will cut them loose if they don't maintain a good work ethic.
* Check out the university student counselling service to see about real (not professed but untested) supports available for postgraduate students. While you will be one of the first to notice something wrong with your PGs, you must be the last to get involved in their personal issues: that's one for the pros.
* Realize that MS students may be physically mature (and talk big!) but they are still in their early 20s and still need reassurance from adults now and again. As a senior professional colleague it's okay to provide this occasionally when it's merited, but do so detachedly and rationally: no hugs, just a strong eye-to-eye, a tap on the arm and a "we all have awkward days / take a break / well done / etc".
* Research is not everything in academia. Education is what it's really all supposed to be about. Get to know other young academics in other departments and listen to their ideas - however loco - on education, research and supervision.
Upvotes: 2
|
2022/12/27
| 706
| 2,584
|
<issue_start>username_0: My university provides access to [Turnitin](https://www.turnitin.com/) for every student. I have prepared a manuscript ready for submission to a double-blind review conference for September 2023. Is it OK to use the plagiarism checker to assess the similarity index of my manuscript before submission? Would that be judged as plagiarised work since the review is double-blind?<issue_comment>username_1: Turnitin is not a tool for checking plagiarism, but for similarity. They are not the same. High similarity does not imply plagiarism. Low similarity does not imply no plagiarism.
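For intuition, here is a toy illustration of what a raw similarity score measures, using Python's standard `difflib` (Turnitin's actual matching is proprietary and far more sophisticated, so treat this only as a sketch of the concept):

```python
from difflib import SequenceMatcher

# Two honest, independently written sentences stating the same standard fact.
a = "The mitochondrion is the powerhouse of the cell."
b = "The mitochondrion is known as the powerhouse of the cell."

# ratio() returns a similarity score in [0, 1]; here it comes out high,
# even though neither sentence plagiarises the other.
print(SequenceMatcher(None, a, b).ratio())
```

The score is high because the two sentences share standard phrasing, not because anyone copied anyone. That is exactly why a similarity index always needs human interpretation.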
Also, if you honestly did all the work yourself, there isn’t any reason to use Turnitin.
Upvotes: 4 <issue_comment>username_2: >
> Is it OK to use the plagiarism checker to assess the similarity index of my manuscript before submission?
>
>
>
You can do whatever you want with texts you have written yourself. So yes, you may use Turnitin, no problem at all. As others have pointed out, you should know how to interpret the result, of course, and think about the difference between similarity and plagiarism.
>
> Would that be judged as plagiarised work since the review is double-blind?
>
>
>
I do not get the logic behind this question. Using a tool such as Turnitin does not automatically imply that you plagiarised. And the type of review process has nothing to do with it, either.
Upvotes: 3 <issue_comment>username_3: As per [Wikipedia](https://en.wikipedia.org/wiki/Plagiarism):
>
> Plagiarism is the fraudulent representation of another person's ... as one's own original work.
>
>
>
So no, "assessing the similarity index of your manuscript" is most certainly not plagiarism.
However, according to <https://en.wikipedia.org/wiki/Turnitin>:
>
> The essays submitted by students are stored in a database used to check for plagiarism
>
>
>
I'm not sure what exactly Turnitin does with the stored materials, e.g. who can access them. Since your work will be stored there, this could potentially conflict with the copyright terms required by your conference. I would check with the conference whether using Turnitin is allowed (and, as others have already mentioned, I wouldn't use any such tool in the first place).
Upvotes: 2 <issue_comment>username_4: No, it's probably not OK: the [Turnitin End User Licence Agreement](https://www.turnitin.com/agreement.asp) says
>
> You agree to use the Site and Services only as follows:... if You are
> a student, only in connection with a registered class in which You are
> enrolled.
>
>
>
Upvotes: 6 [selected_answer]
|
2022/12/27
| 1,284
| 5,523
|
<issue_start>username_0: I am in an uncomfortable situation: last week, shortly before Christmas, I submitted my manuscript (with me as first author). The journal replied on Dec 23rd, saying that the manuscript needs major revisions (and included their suggestions).
The time period for the revision is stated as 7 days, starting from Dec 24. Obviously, as it was Christmas, I have received hardly any answers from my co-authors, including from my supervisor, who has the data that needs to be revised. I have already asked the journal for an extension, but they have not replied. What happens if I miss the deadline?<issue_comment>username_1: Unless it is for a special issue with a firm deadline (or a conference), probably nothing happens, though there will likely be a delay in publishing so that it can be rescheduled.
It seems a bit unreasonable, but do the best you can and either resubmit it or give the editor a firm-ish date on which you can.
It is unlikely that you will get a rejection for that alone, except in the special issue or conference situation.
One reason for a short deadline might be that someone, say a reviewer, on their end already missed a deadline while the editor wants to fill and issue and so they are squeezing you. Alas.
And, as the comments imply, there may be something fishy with this journal. Assure yourself that it is reputable.
But, even for a reputable publisher the deadlines for completion can be very firm since the paper can't be pushed to a future issue. It might be possible, however, for them to remove it from the "special" and schedule it for a future release. That might be the best you can hope for.
Upvotes: 2 <issue_comment>username_2: I have had this before (a ridiculously tight deadline) and sent an email to the editor to ask for a new date. He told me that the journal deadlines were only designed to encourage authors and everyone in the journal basically ignored them anyway. You can hope your editor takes the same view!
But to be sure, why not send an email saying something like "I am so sorry I won't be able to meet that deadline because [reasons]. However we can get the paper back to you on [date]. Please let me know if that will be a problem." They won't reply, and you can use their silence as evidence that they accepted the new date if (and they probably won't) they complain later on.
Upvotes: 2 <issue_comment>username_3: Ultimately the answer to the title question is **You don't. If it's not ready, it's not ready**. You can't submit a major revision that doesn't address the reviewer's comments. Therefore, you will simply have to miss the deadline and deal with the consequences.
However, don't assume the worst. The journal wants to publish your paper too, or they'd have given a reject decision. For most journals, what I wrote in [this answer](https://academia.stackexchange.com/a/101911) applies (I used to work in academic publishing so I know this firsthand):
>
> For context, journals usually give these deadlines because they want to keep the author thinking about the manuscript, and are hoping for a timely revision. The last line about your paper possibly being withdrawn is because there are authors who, after a revise decision, decide not to revise their paper but do not inform the journal. If you exceed the deadline your submission gets marked as inactive. When the journal decides to do spring cleaning and remove all the inactive submissions, that’s when your submission might be withdrawn.
>
>
>
A caveat to this is that if your paper is part of a special issue with a fixed publication deadline, then they might have no choice but to publish without your paper. Since in a comment you wrote "it was already the extended deadline when we submitted", you can't really expect them to hold up the special issue even more for your paper.
One more thing. From your description of your paper's review process, it sounds like you are dealing with an MDPI journal. MDPI is an unconventional publisher - their processes are all hyper-accelerated. Their reviews are fast, their decision time is fast, and the time they give for revisions is also fast. As far as I'm aware, this is because they treat the time taken from submission to publication as a key performance metric (comparatively other publishers won't even measure this metric).
If you are dealing with an MDPI journal, then because their processes are atypical & I don't have an internal view of them, I can't tell what will happen. I would nonetheless guess that they will probably accommodate you, because again they want to publish your paper or they'd have given a reject decision. Also, MDPI's special issues are atypical in that they aren't extra issues of a journal (e.g. a journal that publishes 8 issues a year could have a "special issue" and have 9 issues that year). Instead they appear to be collections of papers on a selected topic with an invited guest editor. If this picture is right, then there is no need for a publication deadline. From their website it indeed looks like there is no publication deadline, only a submission deadline. If so, then they should be able to accommodate you.
A final caveat is that I can see them changing the status to "reject & resubmit", which is effectively the same decision as "revise", it just lets them game their time-to-publication metric. I don't say that they will, but I can see it happening (I can also see it not happening because technically it is past the submission deadline).
Upvotes: 2
|
2022/12/27
| 1,262
| 5,304
|
<issue_start>username_0: I am an independent researcher in mathematics. I wrote a research paper and I intend to publish it.
It is my first math research paper, I have no prior experience in this.
I submitted my paper to multiple journals, but they found out about the multiple submissions and replied negatively.
I tried to list my article on arXiv, but they require endorsement, which is hard to get.
These days I have been sending about 40 emails per day to university professors and to authors on arXiv, asking for an endorsement.
So far zero success.
My article is about number theory. First-year university math knowledge should be enough to understand it.
Can I please have some advice?
Publishing an article seems to be a real nightmare.<issue_comment>username_1: >
> It is my first math research paper, I have no prior experience in this.
>
>
>
The purpose of PhD programs is to train people to do research, including the process of publishing papers. If you're an independent researcher with no prior training, you should not expect this to be easy, and likely not even possible. Imagine if instead your post was "I'm an independent carpenter with no prior experience. I have some wood and I'd like to build a house; I put in some nails but no one wants to live there even though I've been handing out pictures of my wood pile to people on the city bus. Please help." I don't write this to be rude, I write this to help explain that doing and publishing research is a trained profession, and no one should expect to succeed at it without training. This is *totally okay and does not mean you lack the potential or talent to publish good research*, any more than someone with interest and talent in carpentry should be expected to build houses from scratch on that interest and talent alone.
>
> I submitted my paper to multiple journals, but they found out about the multiple submissions and replied negatively.
>
>
>
Yes, they replied negatively because this is extremely rude behavior - reviewing a manuscript in academia takes volunteer time. You've asked a bunch of different volunteers to simultaneously spend their valuable time all on your manuscript rather than the many others they've received to review. An advisor would have helped you avoid this.
>
> I tried to list my article on arXiv, but they require endorsement, which is hard to get.
>
>
>
ArXiv requires endorsement because they're trying to limit the amount of junk that gets posted on their site. An advisor could help you with both the endorsement part and with knowing whether your paper is useful or junk.
>
> These days I have been sending about 40 emails per day to university professors and to authors on arXiv, asking for an endorsement.
>
>
>
This is also rude. Don't do this.
>
> My article is about number theory. First-year university math knowledge should be enough to understand it.
>
>
>
If your article is understandable with just first year university math knowledge, it might be a good demonstration of what you understand about first year university math, but it is **almost certainly not useful research-level mathematics**, the kind that gets published in journals. An advisor would help you understand what sorts of papers are useful to publish and whether yours fits this category.
>
> Can I please have some advice?
>
>
>
You need a research advisor/mentor. One way to obtain such a mentor is to apply for graduate programs - PhD programs and many masters programs provide mentorship in research; for PhD programs it's the main point. It may be possible to find someone to mentor you even without formally enrolling in a program, but this is also difficult - possibly more difficult than applying the normal way (why should a professor spend time mentoring someone who isn't admitted to their program, when they have other students who *are* admitted that also have questions?). You're more likely to convince someone to be your mentor if they appreciate your potential for research rather than this paper in particular.
Upvotes: 6 [selected_answer]<issue_comment>username_2: First, congratulations on your result!
One of the rules is that you can't submit a paper to more than one place at the same time. Also, there are different kinds of mathematical journals. If you submit a paper to a journal, you want to make sure your paper is suitable for that journal.
The reason your e-mails are not getting responses is because mathematicians do in fact receive e-mails from people who are almost certainly writing faulty proofs, so their "default" position is to not respond to your e-mail.
I recommend that you find an established and trusted mathematician who agrees to take the time to understand your result. Is there a local college or university you can go to? Going to a secretary at a physical location would probably be a better bet than e-mailing. I think a liberal arts college (if you're in the US) would likely be more welcoming than a research university.
Do you know anyone with the mathematical background to determine correctness? Maybe an undergraduate math major?
Wait a minute: was your paper by any chance about the Riemann hypothesis? I got an e-mail from someone about this a couple days ago.
Upvotes: 2
|
2022/12/28
| 1,520
| 6,363
|
<issue_start>username_0: I started an engineering PhD program this year with an excellent advisor that I get along well with. I just finished my first semester where students are mostly required to focus on coursework with minimal research work.
When I first started, I was asked to choose either project A or project B for my PhD. Both projects are funded, but project B is actually a joint project that is not fully within my advisor's area of expertise, so it requires co-advising from another professor, who is the principal investigator of the project. I preferred project B to project A, so I chose it.
However, as the semester started, I emailed the other professor and he responded positively, but he then stopped responding completely when I asked to meet him. I sent him multiple friendly follow-up emails throughout the semester, but there was no response at all. After I met his graduate students and talked to a couple of alumni, I got the hint that I would have a hard time working with him. Is it ok to change the project after choosing one?
How can I bring it up to my advisor that I want to change my project from B to A without burning bridges? I am afraid that I will disappoint my advisor and be perceived as flaky and not serious.<issue_comment>username_1: The way to handle this is to just do it, preferably with a sit-down meeting in which you can discuss options and your reservations. If your advisor agrees to a switch then let them deal with the other professor. There is no need to "burn bridges" with anyone.
There is little worse than setting out on the wrong path, knowing that it is the wrong path. Let your relationship with your advisor carry you over the murky bits, both in this and in the research.
Yes, it is OK to switch. Some of us actually switched universities in order to find a happy home. Face to face, though. A conversation.
Upvotes: 3 <issue_comment>username_2: It's much better to have this kind of conversation with your advisor as soon as you realize the issue. I remember as a student being reluctant to discuss this kind of issue with my advisor, but now as an advisor, I'd much rather a student bring it up early.
Let's consider the case in which you wait. Will you make as much progress on project B if it's not your expertise and you need help from the other advisor? If it's difficult to meet with them (e.g., they're really busy) you will probably have trouble learning the techniques yourself.
Now, it's possible that your advisor might want you to stick on project B. Maybe they need the funding, or they need the collaboration with the other group. In that case, I think it's on your advisor to help get the project off to a good start, either by arranging a meeting with the other professor, or finding out which member of that group can help get you started.
Your concern about your advisor's perception of you is, I think, unfounded. Coming with valid concerns about getting a project off to a successful start is (IMHO) a very good sign in a student.
Upvotes: 4 [selected_answer]<issue_comment>username_3: First, a couple preliminaries/considerations that don't directly answer your question
1. Have you tried anything other than emails and gossip?
* Not trying to be harsh, just some people respond much better face-to-face.
* Are there office hours for the professor?
2. Is there money involved?
* Has the funding explicitly/implicitly already paid for your tuition?
* Has the funding paid for anything else?
* Will it go away if you switch, or at least require significant behind the scenes money shifting?
3. Is project A still available?
* If I were a professor and had a student choose B, then I'd look for a student for A.
* Research doesn't stand still, and often there's a limited time window.
Given these, resolve Question 1:
* if( !(Tried anything other than email) )
* then: Attempt to have a meeting with the 2nd professor anyway, even without an email response. Or try calling them and leaving a voice message. Or go to their building and try to arrange an appointment (maybe via a receptionist?). Or show up at their office hours and introduce yourself. Talk about your concerns politely, yet honestly, especially if you feel there's a need for significant research guidance.
Resolve Question 2
* if( know money is involved )
* then: Show far more care about changing. If you do meet with your Advisor or 2nd Professor, then it's definitely something to discuss. It's not so much an issue of flakiness as whether your education will suddenly stop having funding.
* elif( don't know )
* then: Find out. If your Advisor is easier to find and talk with, then it's a point worth raising with them.
* elif( money is not involved )
* then: Far less issue with changing research topics.
Question 3 resolution should be fairly obvious
* if( project A was assigned )
* then: You're probably out-of-luck. Especially if student on project A has already started working and performing research.
* else: Far less issue with changing research topics.
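Since this is a CS question, the three checks above can be rolled up into one (tongue-in-cheek) Python sketch. Every name here is a placeholder for the real-world legwork described above, not an actual API:

```python
def should_request_switch(tried_more_than_email: bool,
                          funding_at_risk: bool,
                          project_a_still_open: bool) -> bool:
    """Roll-up of Questions 1-3 above; advice-as-code, purely illustrative."""
    if not tried_more_than_email:
        # Question 1: first try office hours, a phone call, or showing up in person.
        return False
    if funding_at_risk:
        # Question 2: sort out the money question with your Advisor before anything else.
        return False
    # Question 3: only worth requesting if the other project is still open.
    return project_a_still_open
```

Obviously real life doesn't reduce to three booleans, but the ordering matters: exhaust the cheap checks before asking anyone to shuffle projects and funding around.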
Personally, I think I'm slightly less positive on switching than the other answers (especially if you've only tried email). You're in a profession where you're expected to track down answers and pursue (somewhat) independent lines of study, so I'd expect that person to at least meet the other professor. That said, I understand the issue, as I had a PhD experience with very distant advisors, and many people going from undergrad to PhD go from lots of advice to almost none.
If you try meeting the 2nd professor (even without an email confirmation) and it still does not work, and there's limited risk to your education funding, and Project A is still available, then it's probably acceptable to request a project change.
Meet with your Advisor. Get their opinion on the situation, explaining what you've tried and your concerns. In their position, I would probably pick up the phone and try calling the 2nd professor. If you're set on switching, try to make the switch as painless as it can be, and offer to do whatever legwork is necessary. I.e., don't just dump it on the prof and expect them to deal with it; personally, I find that passive-aggressive. Also, I cannot stress this enough - try more than just email. Do you really want to go to your Advisor, have them ask "What have you tried?", and have your only response be "I sent some emails"?
Upvotes: 2
|
2022/12/28
| 844
| 3,747
|
<issue_start>username_0: I wanted to ask my teacher for a letter of recommendation, and wanted to contact her via a phone call. Upon asking her what time she would be free (I didn't mention the purpose for calling), she answered that she was sick and out of town as well. She told me that she would be free after around 10-15 days. However, that is pretty late, since she will also need some time to write the recommendation letter. The admissions team is going to be reviewing the materials beginning in January.
What should I do? Is it rude to bring up the matter of my recommendation letter when the teacher is sick? Should I wait for 10-15 days, or should I email her regardless and at least inform her about it?<issue_comment>username_1: I *don't think anyone else can answer that for you* ... But on the other hand, I ***do*** think that there is a way for you to think about the problem more clearly, which might help.
First, there isn't really anything special here, in terms of how you deal with the other person, about that person being your teacher. They are just another human being. Ask yourself what you would do in similar circumstances if, say,
* you needed/wanted something from someone who was your friend, and
* your friend said they were sick.
Please don't get me wrong. I'm *not* implying that you shouldn't contact your teacher. I'm suggesting that you weigh up your concerns about the other person against your own needs, ... just as one has to do in so much of life.
You could then consider doing any of the following:
* Not contact your teacher until 10 days' time ... effectively limiting the potential time in which the teacher *might* be able to respond to your request, but accepting that she is unavailable in the short term.
* Contact the teacher again with a simple repetition of your request ... which is likely to seem rude
* Write (email) your teacher, and open with a simple statement that you understand that she is unwell, and that you are probably bothering her. But also clearly explain your own circumstances and concerns. That doesn't force the other person to do anything. Instead, having made your choice, it puts the ball in their court, and asks them to weigh up their needs against your stated needs, and leaves them to make a decision.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Caveat: People's attitudes to preferred modes of communication vary widely.
You approached this in the wrong way. Writing a letter of recommendation is a rather standard task. I would consider it best practice to request a letter via an email containing the most pertinent information, including the offer to speak in person/on the phone/via Zoom if the letter writer deems this convenient. Ideally, such a request comes about 4 weeks prior to the deadline.
If someone requests a phone call without disclosing what it is about, I personally would fear something messy and complicated coming up. If there is no urgency communicated at the same time, being unwell, travelling or Christmas would all individually be sufficient reasons to postpone it.
Since the deadline for the letters does seem to be after the 10-15 day period the lecturer said they were unavailable, here is what I would do for the way forward: Respond to their email now, wishing them a "get well soon" and a good start to the new year (if they are operating on a calendar where that is relevant), informing them that the requested phone call was just about a letter of recommendation, and include the relevant information for that.
The great thing about email as a communication medium is that it is asynchronous: The lecturer can decide when they feel up to reading their emails.
Upvotes: 2
|
2022/12/28
| 381
| 1,646
|
<issue_start>username_0: I am an established lecturer, with my own niche area of research. My area should be of interest to lots of Master of Arts (MA) courses in my field—I get good student feedback and it's an interesting topic.
What are good ways of letting organisers of Masters courses in my field know that I am able to provide one-off guest lectures on their courses?<issue_comment>username_1: I can think of two ways, but both are long term solutions.
First, you could work to achieve a large (large) circle of collaboration in your field, so that people think of you and, perhaps, invite you for a visit. You might be asked to talk to students as part of a collaborative visit. Some of those might be paid, but not likely very many. Conferences are good for finding people with similar interests. And, with collaborators, you can think in terms of mutual invitations to speak, or even exchanges.
Second, you could work to become so dominant in your field's knowledge base that people you don't know think of you. That could be in academic publications or in the popular literature. <NAME> got a lot of such invitations, for example, as did <NAME>ős. Everyone knows about your accomplishments and wants to hear what you have to say. <NAME> was another example.
Upvotes: 1 <issue_comment>username_2: As a starting point, I would recommend you write directly to the lecturers in the relevant courses. They are the ones that will make the content decisions for their own courses. Some may see your offer as a good way to get some valuable content and presentation in their courses while easing their own workload.
Upvotes: 2
|
2022/12/28
| 1,034
| 3,610
|
<issue_start>username_0: I (actually my cousin) just received an offer letter from [SJSM-Anguilla](https://www.sjsm.org/anguilla-school-student-life/) to study [MD](https://www.sjsm.org/usmle-step-1-guarantee/). They say that they are accredited in the Caribbean and that, after completing the program, students can do residency and practice in the USA.
However, my cousin doesn't have any science background; he has never studied math, chemistry, physics, or biology in his life. The address of the admission office is in the USA. Another suspicious thing is that the offer letter doesn't have any serial/identification number.
How can I assess whether this is legit?
[](https://i.stack.imgur.com/HfNFx.png)<issue_comment>username_1: Yes, this is a legitimate offer from a real school.
I also took a quick look at online reviews. Ironically, the sheer number of negative reviews, such as *"don't go there"* or *"avoid this place at all cost"*, will surely make you believe that this school, SJSM, is a real, legitimate institution (a scam would hardly accumulate so many detailed student complaints). There are also reviews with the interesting title *"SJSM - The Truth"* that I would like you to read.
Also, they have recently been [fined](https://www.ftc.gov/news-events/news/press-releases/2022/04/federal-trade-commission-takes-action-against-profit-medical-school-using-deceptive-marketing-lure) 1.2 million USD by the Federal Trade Commission for deceptive marketing.
>
> “<NAME> lured students by lying about their chances of success,” said <NAME>, Director of the FTC’s Bureau of Consumer Protection.
>
>
>
>
> Misrepresented School’s Medical License Exam Pass Rate: USMLE Step 1 Pass Rate was advertised to be 96.77%. The true rate is 35%
>
>
>
>
> Misrepresented Residency Match Rate: The advertised match rate is 85-95%. However, in reality, the match rate for Saint James students is lower than touted, and lower than that reported by U.S. medical schools. Since 2018, the defendants’ average match rate has been 63%."
>
>
>
So, what you should think about now is whether or not it is worth going there.
Upvotes: 4 <issue_comment>username_2: Saint James School of Medicine seems legit. It is a private, for-profit medical school. They appear on [Wikipedia](https://en.wikipedia.org/wiki/List_of_medical_schools_in_the_Caribbean)'s list of medical schools in the Caribbean, and they are ECFMG eligible, meaning that their graduates may be considered for residencies in the US (and, it seems, Canada as well). The letter also seems legit, in that they are directing you to send payment to the school, not to some random Venmo account.
The catch is that actually getting a residency in the US can be [quite difficult](https://www.nytimes.com/2021/06/29/health/caribbean-medical-school.html). In particular, I notice they claim that 89% of their students passed the USMLE (licensing examination) on the first try, but Wikipedia reports that the real number is closer to 35%. The discrepancy is explained by the fine print -- the 89% only counts students who got 195 or higher on the Comprehensive Basic Science Exam. If your cousin has never taken science "in his life," then it's unlikely he will be in this group.
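As a rough back-of-the-envelope check (the school doesn't publish the exact share of students who reach that threshold, so these numbers are purely illustrative): if only about 39% of students clear the 195 CBSE cutoff and 89% of those pass the USMLE on the first try, the overall first-attempt pass rate works out to 0.39 × 0.89 ≈ 0.35, which matches the 35% figure Wikipedia reports.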
In short: it looks like your cousin has a pathway toward earning a "real" MD. But, becoming a board-certified, licensed doctor requires more than an MD, and that might be a real challenge. This could lead to a difficult situation where your cousin has spent a fortune on tuition and fees but cannot see much return on that investment.
Upvotes: 4 [selected_answer]
|
2022/12/28
| 998
| 3,704
|
<issue_start>username_0: I am applying for PhD programs and some of them ask for a teaching statement for TA positions. However, as a graduate student, my only experience related to teaching is working as a grader for a graduate-level class. I only worked closely with the professor and never communicated with any student.
I've read some teaching statement examples, but it seems nearly all of them are for people applying for faculty positions, with or without teaching experience. My question is: should I follow this kind of example and discuss teaching philosophy/methods? Or is there another way for graduate students to write the statement? I think the department knows that graduate students are unlikely to have teaching experience, so I worry that imitating the faculty style of writing may look naive and not work.
Thank you!
|
2022/12/28
| 4,464
| 18,337
|
<issue_start>username_0: As an example, someone who's not a native speaker of English might come up with the following paragraph (generated using a second-class translation service):
>
> Basic principle of gamification is providing of receipt of permanent,
> measurable feed-back from an user, providing possibility dynamic
> adjustment of user behavior and, as a result, rapid mastering of all
> functional possibilities of appendix and stage-by-stage immersion of
> user in more thin moments. Another method of gamification is creation
> of legend, history, provided with dramatic receptions, that
> accompanies the process of the use of application. It assists creation
> for the users of feeling of complicity, contribution to common cause,
> interest in the achievement of some invented aims.
>
>
>
One could then use GPT-3+ (aka ChatGPT) to rewrite the above into something closer to what a native English speaker might write:
>
> The fundamental idea of gamification is to give users continual, measurable feedback, allowing them to adjust their behavior dynamically and quickly learn all the features of the app. Another way of gamification is to create a story with dramatic elements that accompanies the user's experience. This helps to create a sense of involvement, contribution to a shared goal, and interest in achieving the set objectives.
>
>
>
Ignoring the question of how good a job GPT-3 did here, is such use of neural networks acceptable in academia? All the original ideas are still yours, but GPT-3 helps you convey them using shorter phrases and better English.
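For concreteness, the workflow I have in mind is trivially scriptable. Here is a minimal sketch assuming the (pre-1.0) OpenAI Python SDK that was current in late 2022; the model choice and prompt wording are just examples, not a recommendation:

```python
import openai

openai.api_key = "sk-..."  # your API key

def polish(text: str) -> str:
    """Ask a GPT-3 model to rewrite a draft in idiomatic English."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=("Rewrite the following text in clear, idiomatic English, "
                "preserving its meaning exactly:\n\n" + text),
        max_tokens=512,
        temperature=0,  # keep the rewrite conservative
    )
    return response["choices"][0]["text"].strip()
```

The point is that the author still supplies every idea; the model only acts as a copy editor over the draft it is given.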
**Update**: Since the example I've provided generated a lot of debate, here's a slightly better original text with a correction provided by GPT-3:
>
> On enormous territory of park there are lakes, rivers, canyons and caves. Lake Yellowstone, one of the greatest alpine lakes in North America, is located in the center of Yellowstone caldera, greatest supervolcano on a continent. Caldera is considered a dozing supervolcano; he erupted with enormous force several times for last two million years. Greater part of territory of park is covered by hardening lava; in a park there is one of five existing in the world of the geyser fields.
>
>
>
Corrected to:
>
> On the enormous territory of the park, there are lakes, rivers, canyons, and caves. Lake Yellowstone, one of the greatest alpine lakes in North America, is located in the center of the Yellowstone caldera, the greatest supervolcano on the continent. The caldera is considered a dormant supervolcano; it has erupted with enormous force several times over the last two million years. The majority of the park's territory is covered by hardened lava; within the park, there is one of the five existing geyser fields in the world.
>
>
>
As you can see, an extremely poor text is probably a bad use case for GPT, but a few errors here and there are well within its capabilities.<issue_comment>username_1: If the outcome really only includes your own ideas and they haven't been extended by the language model, then it should be fine. It's like a German-to-English translation, except it goes from bad English to good English. This all rests on the assumption that writing good English isn't itself part of what is being graded.
Upvotes: 3 <issue_comment>username_2: *Edit: the question after the edit is so different from the original version that I wrote a completely new answer. My original answer is preserved at the bottom of the post.*
**1. New answer (added following OP's update to the question, which effectively turns it into a new question)**
Your added example shows the use of ChatGPT to improve a text written in intelligible, but unidiomatic, English. ChatGPT apparently outputs a grammatically flawless version of the same text, without any of the meaning having been altered in any way.
So, is it okay to use ChatGPT in such a way? Yes, absolutely — as I said before, it's exactly as "OK" as it is to use Google Translate, Grammarly and similar tools. That is, as long as you are not submitting the text as a class assignment meant to test your English writing abilities, in which case it would obviously be cheating. Otherwise, I can't think of any plausible objections anyone could raise here. The issues of accidental plagiarism, giving credit to the creators of ChatGPT and the texts it was trained on, etc, that were raised by some people previously, seem completely irrelevant in this scenario. All you are doing is improving grammar, and that's perfectly fine if indeed ChatGPT is capable of making such improvements.
To be clear, my analysis above pertains only to the types of usage illustrated by the specific example you posted in the updated question. I still have some doubts about whether putting too much trust in ChatGPT's capabilities is a good idea, and can imagine situations where a writer with a poor grasp of English language nuances would not realize that ChatGPT was making their text *worse* in some ways, or even modifying the ideas of the text or committing plagiarism. But that's a secondary concern that's not so pertinent for the current discussion. Ethically, there is nothing inherently wrong in using automated language improvement tools, as long as we remember that at the end of the day, writers are responsible for the content they are putting out into the world, in the same way that the driver behind the wheel of a Tesla running on (the misleadingly named) "Autopilot" mode is responsible for what their car is doing on the road. If ChatGPT messed up and you ended up putting your name to an ethically compromised text, that's on you. With responsible use, this can be avoided.
---
**2. Original answer and edits prior to OP's update to the question. This addresses the "bad" example in the original question and seems no longer relevant for what OP actually wants to know**
It's exactly as "OK" as it is to use Google Translate, Grammarly, or any of the other automated language-transformation tools that purport to turn all of us into budding Hemingways with zero effort on our part.
To be more precise, there isn't anything unethical about it, but as of the time of this writing, it doesn't work well enough to be anything but a complete waste of time (as your own example illustrates). So if by "is it OK" you mean "is it a good idea", the answer is no.
**Edit:** to be clear, it’s possible that some ways of using ChatGPT, when used with caution and some effort by a person with already reasonably good English language proficiency, may actually help the person produce writing of slightly better quality. See @anjama’s comments below. I make no claim one way or the other on this issue. My answer above was addressing OP’s specific suggested use case, as illustrated by their example.
**Edit 2:** I took a closer look at OP’s example input text for the “improvement” workflow. Honestly, the text is pure gibberish, and the idea that an intelligence, whether artificial or human, could penetrate through the poor writing and fathom the writer’s “actual ideas”, then express them more eloquently, is simply absurd. [Garbage in, garbage out](https://en.wikipedia.org/wiki/Garbage_in,_garbage_out), as they say. I suppose you could interpret it as a sort of general prompt for someone to discuss gamification and what it’s about. If I had to rewrite it, I could use my own knowledge of what gamification is to write something plausible, but it would bear little relation to whatever it was that the writer of the text actually wanted to express.
To summarize, OP specifically asked for his premise not to be criticized, but the question simply makes no sense in its current form: either the premise is completely false, or the example OP used to illustrate the situation he had in mind is atrociously bad. In the absence of another, better example that lives up to OP’s premise, the question cannot be given a meaningful answer.
Upvotes: 7 [selected_answer]<issue_comment>username_3: As long as we limit our usage of AI to paraphrasing and rewriting our own text and original work (that is not plagiarised), it is not merely OK, but important and essential to leverage every tool available to you to improve your writing and the final output.
Your writing has to evoke interest, be precise and short, and convey the ideas, findings and such with as much flair as possible.
AI tools can help non-native writers overcome language, idiom, grammar and syntax challenges, and this is welcome.
The caveat at the beginning of this answer stands at the end too: given the power of AI to create text based on prompts, it is challenging to say a clear yes or no. So, as long as we limit the use of AI to paraphrasing, it is a welcome use.
Upvotes: 1 <issue_comment>username_4: I would treat this approximately as I would treat proofreading by a human. If it results in substantial changes to the presentation, it should be mentioned in the acknowledgments ("Our thanks go out to the developers of GPT-4, which made numerous useful suggestions on the presentation of the ideas put forward in this paper. In addition, it spotted a crucial error in an earlier version of the proof of Lemma 5, without which Theorem 7 would not have been possible, and assisted us greatly in formalizing the proof of Theorem 8...."). But there is nothing *unethical* about getting feedback on a paper from anyone, irrespective of whether they are a person or software.
Upvotes: 4 <issue_comment>username_5: It's as okay as using Google Translate, automated spellchecking, finding synonyms with [MS Word](https://en.wikipedia.org/wiki/Microsoft_Word), etc. I don't see any downsides.
Yes, a person may appear more fluent than they are, but if they do it consistently, what's the issue? Yes, it may hamper someone's language development, but that's their choice. Yes, it may give you an edge over competitors, but it's their own fault for not using all the tools at their disposal.
All the Luddites who think this is a bad idea will go the way of the dodo. Don't mind them.
Upvotes: 2 <issue_comment>username_6: **No!**
GPT does not understand the input it is given. You can have no confidence that GPT will not change a correct statement into an incorrect one, possibly in subtle ways that will be difficult to spot for our hypothetical non-native speaker. For example, in your sample text, GPT changed the word "permanent" into "continual," which doesn't mean the same thing. It also changed the first sentence from the original, which was about collecting feedback *from* a user, into one about providing feedback *to* the user.
You may argue that these changes make the text more correct. That is a reasonable argument. However, the reason GPT is able to do this is that it is trained on a broad corpus of texts. Not knowing any better, it will combine the input text it is given with appropriate selections from its training data, thereby incorporating elements that *are not the author's original work*. Unless these sources are identified (likely impossible) and cited appropriately, *this may constitute plagiarism*.
Upvotes: 4 <issue_comment>username_7: Generally speaking, the answer is an emphatic **no**.
The main reason for this *no* answer is as follows: the purpose of writing a paper is to demonstrate your learning and your work. Asking some tool or automation to take what you can produce, "improve" it, and then present that "improved output" as "your work" misrepresents reality. When it gets discovered that "your work" was not really yours, there will be adverse consequences, and these will be much harsher than if your work had merely not been "good enough" but was at least "yours". So, it is in your best interest not to misrepresent someone else's, or something else's, work as your own.
That said, it is *ok* to use ChatGPT or any other tool for improving *you*. The main reason is as follows: every tool that analyzes your work and provides feedback on possible improvements is helping you improve yourself. An improved you will produce better quality work. So, one should use these tools to "learn", rather than to "copy". That learning will, over time, result in better quality output from you. By using these tools to "learn", you will have gained skills. By using these tools to "copy", you miss an opportunity to make yourself better.
Upvotes: 2 <issue_comment>username_8: >
> Ignoring the question of how good of a job GPT-3 did here…
>
>
>
I think a bit of a frame challenge is necessary here. If GPT-3 or some other language model was capable of preserving meaning exactly (not adding, not removing, and not altering) while making purely stylistic changes, that would be one thing. But that's not what the GPT family does. It produces convincing-sounding waffling that is in many cases not quite semantically equivalent to the input, or, when asked to answer a new question, not quite correct. In academia, the devil is really in the details. So while asking GPT-3 for writing help can be useful (just the other day, it helped me discover the term "pleading necessity"), copying and pasting its responses is a dangerous road to go down. Think of it as a very diligent but overconfident and intellectually shallow assistant.
Upvotes: 5 <issue_comment>username_9: I have not used ChatGPT, but I have used a different, less intrusive tool called [LanguageTool](https://languagetool.org/), in particular when I write German (to help with die/der/das/dem/den/des), but to a (much) lesser degree when I write English. LanguageTool is an automated checker for spelling, grammar, and style, but it also suggests how to reformulate a sentence, such that a strict language teacher might argue the text is no longer entirely my own.
Check the rules of the journal. Chances are they allow automated spelling, grammar, and style checkers, but not text generators such as ChatGPT.
Whatever you end up using, you should mention it in the acknowledgements. You would thank a colleague for proofreading your English; the same applies to artificial intelligence or automated corrections based on other technologies. Such an acknowledgement also provides full disclosure, protecting you from allegations of academic dishonesty by people who think you shouldn't use those tools, or who think you must be open about it if you do. The terms and conditions of the tool and/or the journal may also require such disclosure.
For example:
*Parts of this text were processed using aspell/LanguageTool/ChatGPT/Deepl Translate/Google Translate/... to improve language.*
Upvotes: 2 <issue_comment>username_10: For me, the answer is yes.
Using a neural network to "rephrase" your passage is acceptable in academia. As far as I know, no journal rule states that you can't use a language tool to revise your paper, and it doesn't matter whether that tool is ChatGPT or some other NLP model.
But keep in mind that ChatGPT is called *Chat*GPT for a reason: whatever the true intention, using a model optimized for dialogue in an academic scenario just because its output sounds "closer to a native English speaker" is not a wise idea.
Just be careful that ChatGPT may (actually, probably will) lose content that helps convey the ideas of your original work; beyond that, use it at your own risk and mention it in the acknowledgements.
Upvotes: 3 <issue_comment>username_11: Instead of transforming the text directly, you can ask it for points on how to improve the text.
That way, you can decide for yourself whether the points it suggests are worth incorporating into your text or not. It also reduces the probability of GPT messing up your original thoughts/arguments in the text.
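To make this concrete, here is a minimal sketch of what such a "suggestions only" prompt could look like, using the OpenAI Python client as it existed around the time of writing; the model name, prompt wording, and file name are illustrative assumptions, not a prescribed setup:

```python
# A minimal sketch of the "suggestions only" workflow (assumptions: the
# pre-1.0 `openai` Python client, a valid API key, and a plain-text draft
# on disk -- all illustrative, not a prescribed setup).
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

with open("draft_section.txt") as f:  # hypothetical file name
    draft = f.read()

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": ("You are an editor. List concrete, numbered "
                     "suggestions for improving the following text. "
                     "Do NOT rewrite the text itself.")},
        {"role": "user", "content": draft},
    ],
)

# Print the suggestions; the author decides which ones to apply by hand.
print(response["choices"][0]["message"]["content"])
```

Because the model only returns a list of suggestions, none of its words end up in your manuscript unless you deliberately put them there.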
When generating text from scratch or rewriting, you need to be very careful as it can make things up and/or change the meaning of what you originally intended. It can be similar to reviewing text from an inexperienced student who is good at bullshitting. If you are confident in your ability to spot the bullshit, fine, but that's your responsibility.
Ultimately, I don't think it is possible to ban the use of these language models, and every person needs to decide for themselves whether it creates value for them or not.
Upvotes: 3 <issue_comment>username_12: At the time of this writing, the answer is likely in flux and dependent on the exact journal in question. For example, it was reported today that the International Conference on Machine Learning (ICML) has banned the use of large-scale language model (LLM) programs such as ChatGPT in conference papers, writing in their newest call for papers:
>
> Papers that include text generated from a large-scale language model
> (LLM) such as ChatGPT are prohibited unless the produced text is
> presented as a part of the paper’s experimental analysis.
>
>
>
As a follow-up to public interest queries, the ICML wrote a [longer clarification](https://icml.cc/Conferences/2023/llm-policy), including the note:
>
> The Large Language Model (LLM) policy for ICML 2023 prohibits text
> produced entirely by LLMs (i.e., “generated”). This does not prohibit
> authors from using LLMs for editing or polishing author-written text.
>
>
>
However, it's debatable whether LLMs like ChatGPT are actually suitable for the purpose of editing and polishing text. For example, interviewed by the [Verge](https://www.theverge.com/2023/1/5/23540291/chatgpt-ai-writing-tool-banned-writing-academic-icml-paper), <NAME> (AI research fellow, Mozilla Foundation) said:
>
> I see LLMs as quite distinct from something like auto-correct or
> Grammarly, which are corrective and educational tools... Although it
> can be used for this purpose, LLMs are not explicitly designed to
> adjust the structure and language of text that is already written — it
> has other more problematic capabilities as well, such as the
> generation of novel text and spam.
>
>
>
Upvotes: 1 <issue_comment>username_13: Let's avoid the discussion about the content/quality (of Chatwhatever AI) and stick to the formal part.
The worst thing you are doing is transferring text that should be confidential (where "some text" is "your review") to a service over the web, with no control over where and how it stores and accesses said text.
Upvotes: -1
|
2022/12/29
| 2,464
| 10,251
|
<issue_start>username_0: I took a course, with 2 term exams (20% each) and a final (30%). I achieved close to the highest score on the first term exam, the highest score on the second term exam, and the highest score on the final exam. In all cases, I was significantly above the average score (by >20 points). But I did rather poorly on the assignments, so my final score was 420/600. The average score was 250/600, and the class is graded on a curve.
The Professor gave me a final grade of B instead of an A, claiming that a different student (caucasian) had a pre-final average 40 points higher than mine. This is possible, due to the assignments, but it does not take into account our performances on the final exam, which was very difficult. The other student has not yet taken the final exam due to "external reasons" (so it's possible no one will get an A in this class). Can I accuse the professor of discrimination, as I feel is the case?
How can I handle this?
Additional details:
* The "pre-final" score is a weighted average of Exams and HW
* I may have received the highest grade in the class. The student who didn't take the final has not yet received a grade.<issue_comment>username_1: Before you accuse anyone of anything, get the facts. Avoid confrontational language like "accusing", and instead "seek clarifications". Establish for certain the facts that you have laid out, *i.e.* make absolutely sure someone really got an A without completing the final exam. There could be rumours and incorrect information circulating, so you need to be sure about your facts.
If, after seeking clarification, your final grade still does not follow the course outline or the usual assessment scale, consider an appeal. In most universities there is a formal procedure for such appeals, often with various stages, eventually escalating to deans and/or review panels.
Before you "accuse" an instructor of discrimination, make sure you have followed due process, or else the system will ignore your complaint.
Upvotes: 3 <issue_comment>username_2: Your professor gave the other student the final grade before he/she even took the final exam. In my opinion, that shouldn't have been done. However, it is not necessarily a violation of your institution's code of conduct, which you will have to check yourself.
At many institutions, including mine, the exam structure is entirely at the discretion of the course instructor. That means that they may change it if it is really necessary (e.g. adversarial situations), even for a single student. They can change the mode of the exam from written to oral and vice versa. They may grant exam exemptions based on students' performance (e.g. regularly submitting homework with excellent results; class participation;...).
However, the main point of my answer (this part) is to address the question of whether or not it was unreasonable that he gave you a B (for 420/600) while giving the other student an A. You mentioned that the final exam accounted for 30%, which was 180 points. Before the final exam took place, his/her score was 40 points higher than yours. This implies that he/she could still earn up to 180 points on the final exam (for 560/600, which is 140 points higher than yours), so **it was not too unreasonable to give the other student a higher score than yours** given his/her past performance.
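To spell out the arithmetic behind that conclusion (taking the 40-point gap at face value as raw points; the specific 560/600 figure additionally depends on how the answer reads the numbers, so treat that as an assumption): write $p$ for your pre-final score and $f \le 180$ for your own final-exam score, so that your total is $p + f = 420$. The other student entered the final with $p + 40$ points and could add up to $0.30 \times 600 = 180$ more. Hence

$$\big[(p + 40) + 180\big] - (p + f) = 220 - f \ge 40,$$

i.e., whatever you scored on the final, the other student's ceiling was still at least 40 points above your total.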
Also, as other people pointed out in the comment section of username_1's answer, 420/600 (~70%) seems not to be a high score for an A. However, I don't have information about the grade distribution of the course, so I cannot give any opinion on whether or not he/she or you deserved an A.
Upvotes: 1 <issue_comment>username_3: I agree with @username_1 entirely, but I'm going to take a slightly different approach here and focus on the way you have phrased your question. You use the word ***discrimination***, and it is that word that is worth examining.
Discrimination often has two faces: discrimination *against* someone or something (perhaps on the grounds of sex, race, personal dislike, disability, or age), and discrimination *in favor of someone*, or *for* someone, (again perhaps, on the grounds of sex, race, favoritism, disability, or age). Your tone suggests that you think you might have been discriminated against; you feel hurt and as if you have been treated unfairly. However, even if some kind of discrimination has occurred (as yet unproved ... to quote @username_1, "get the facts"), it might be that the discrimination has been exercised in favor of the other person rather than against you.
The distinction is important. For example, in my university, the entrance examination grades of student applicants from low-socioeconomic backgrounds are given an automatic boost of 5 points. However unfair that might seem to a student who missed out on a university place by 1 point because they were beaten by a low-SES student whose original unboosted grade was 4 points lower, the policy is *intended* to exercise a positive-discrimination bias in favor of low-SES students, rather than to punish high-SES students. Similarly, at many universities, a student who is registered as having a disability will be given more time to complete an examination, even in the absence of proof that the specific disability would have had an impact on performance in the exam!
Discrimination in all its guises, positive and negative, often seems unfair to at least *some* people; but it is a deliberate and lawful element of a great deal of public policy in the United States, Canada, Australia and the United Kingdom.
My intention, with this somewhat discursive answer, is to point out that unless you know the policies of your university in detail, and unless you know the circumstances of the other student in detail (which is highly unlikely), then *any accusation* of discrimination is likely to be on very shaky ground.
Following on from @username_1, your first step might be to have a longer conversation with your professor, and ask exactly how your mark was calculated. You could also ask, without reference to any other student, whether that process applies equally to all students, or whether the university policy includes any positive discrimination aspects.
Upvotes: 1 <issue_comment>username_4: In summary, I think you should file a grievance with the department. I assume that your professor is tenured, so the likely outcome is that he will receive some feedback/counseling on how to do his job better in the future.
Now, the gory details...
I don't like grading on a curve. I think it's inappropriate for most classes. If the same material is taught every semester, but in some semester the students are so strong that almost everyone deserves an A, but in some other semester the students are so weak that almost no one deserves an A - so be it. Grade the students based on their own work, not that of their random cohort. Much of your frustration seems to stem from the use of the curve. I hope that your professor will stop using the curve in the future.
As other people pointed out, the numbers that you posted show that although you did well on the exams, you didn't do any homework, which is part of the grade. A lot of people (myself included) don't like students who skip the required homework. So if some student's final grade were borderline B+ or A-, and the student did not even attempt the homework assignments that are part of the grade, then I'd definitely go for the lower grade. Frankly, it sounds very much like your professor simply decided to penalize you a whole letter grade for not doing the homework, which to me seems excessive. This is why I suggest that you use the process for appealing your final grade, which your department undoubtedly has. You may get your grade changed to B+ or even A- if indeed you got a B because you skipped the homework. And don't skip the homework in the future!
>
> Professor informed me that student did not take final due to "external reasons", which may be personal.
>
>
>
Your professor really shouldn't be discussing other students with you! If not for the curve, I'd just say that other students' performance should not concern you. But it does, because of the curve; and your professor may have broken some Federal rules by telling you too much. Here is some free training that would benefit him: <https://studentprivacy.ed.gov/training/ferpa-101-colleges-universities> :) Now the other student appears to have solid grounds to complain that the professor victimized him and violated his privacy by discussing him with you.
It's possible that the professor treated the other student better than you because the other student did the homework and you did not. This sounds most plausible.
It's also possible that the professor treated the other student better than you because the other student is caucasian. I see no evidence here of anyone's race being relevant, but you're free to make this allegation to the department, and see how that plays out. But practically, you're better off focusing on your grade being too low, rather than on someone else's grade being higher than yours.
When I was an undergraduate, many decades ago, it was quite common for a course syllabus to say that students who turned in all the projects and scored better than some threshold on term exams could skip the final and get an A. It seems to be less common these days and many departments actually have grading policies requiring that everyone take the final.
When I teach, it seems that every semester some students don't take the final with the rest of the class because they have some disability requiring them to have "double time". So they take the same exam as everyone else, but with a separate proctor, and get more time to work on it.
And some students can't take the scheduled final (for example, I once had a very pregnant student who went into labor on the day of the final), and take it later. Schools have procedures for that.
My point is that if you don't see some student taking the final with the rest of your class, you should not assume that he's getting some preferential treatment. But your professor is definitely wrong to discuss the other student with you.
Upvotes: 1 [selected_answer]
|
2022/12/29
| 790
| 3,224
|
<issue_start>username_0: I am a university faculty member supported by [this](https://www.simonsfoundation.org/grant/simons-collaborations-in-mathematics-and-the-physical-sciences/) Simons Foundation research grant. Who decides on the spending rules for this grant? Can my university impose any rules, for example accommodation cost limits or a requirement that any use of funds requires a justification as to how it "benefits" my university?<issue_comment>username_1: Generally speaking, with possible exceptions, spending rules are decided by the funding agencies, but universities may impose additional constraints within the rules set by the agencies.
For instance, I participate to several European projects funded by a certain agency. When travelling, participants to the same project from different institutions have different travel and subsistence limits, and these are set by the institutions. The differences in these limits across the institutions can be fairly large, and the limits may be even different within the same institution depending on the rank of the people travelling.
Furthermore, yes, any use of the funds within a certain grant typically needs to be justified. For instance, if I travel to a conference and I want the travelling expenses to be covered by that grant, I'm required to submit at least a paper acknowledging the grant.
When buying instruments, the institution can set the rules according to which an instrument can be bought and in which way its operating time within the project should be recorded.
Some institutions might have relaxed constraints with respect to others, but generally don't expect to be able to freely dispose of your grant money without justification. And the funding agencies may send revisers years later to verify that the grants had been used according to the rules.
Upvotes: 3 <issue_comment>username_2: As [@Massimo\_Ortolano](https://academia.stackexchange.com/a/192027/104266) remarks, universities frequently impose constraints on how grant funds may be spent that are over and above the constraints imposed by the granting agency.
That said, the constraints imposed by the Simons Collaboration itself are in fact detailed on the very [page](https://www.simonsfoundation.org/grant/simons-collaborations-in-mathematics-and-the-physical-sciences/) to which you provided a link. There are within-page links, one detailing the things on which funds are permitted to be spend ([Allowable Expenses](https://www.simonsfoundation.org/grant/simons-collaborations-in-mathematics-and-the-physical-sciences/#body-c5)) and the other, explaining what should happen to [unspent funds](https://www.simonsfoundation.org/grant/simons-collaborations-in-mathematics-and-the-physical-sciences/#body-c7) at the end of the grant period.
The information headings (i.e., "allowable expenditure" and "unspent") funds are fairly typical of many granting agencies. More specifically, if you locate the website for a granting agency, then you might be able to search from within the agency itself for those headings, or alternatively use Google with a search such as:
>
> site:thegrantingagency.gov "unspent funds" OR "allowable expenses" OR "allowable expenditure"
>
>
>
Upvotes: 2
|
2022/12/29
| 921
| 3,658
|
<issue_start>username_0: I made several plots and figures. About 20% of the data I used to make a plot came from other people's papers, which I'm surely going to cite in the usual manner. But the other 80% of the data I collected on my own during the research, so I'd like to point this out in the figure's caption (otherwise a person who reads my article might go to the cited paper and find only 20% of the data there).
What is the proper way of doing this? Should it look like this (the example picture is taken from [here](https://r-charts.com/correlation/scatter-plot-group/)):
[](https://i.stack.imgur.com/w6Ey6.png)
>
> Fig. 1. Dependence of Y on X within different groups [123, own dataset].
>
>
>
where:
* [123] is the ordered number of an external dataset reference in the reference list;
* [own dataset] - an attempt to reference my own data.<issue_comment>username_1: I suggest you have a section at the beginning in which you describe the datasets used in the work, referencing other works' datasets and presenting your own datasets with the methodology used to collect them.
Label or number each dataset, so that you can later refer to the datasets in figures and tables. For instance, your caption could become
>
> Datasets D2 and D5.
>
>
>
or
>
> Datasets D2 [123] and D5.
>
>
>
Upvotes: 1 <issue_comment>username_2: I assume you describe the dataset elsewhere in the paper (including data collection methods, or, if you're using and analyzing a data set published elsewhere, at minimum the data analysis methods) like username_1 [suggests](https://academia.stackexchange.com/a/192031/17254). If feasible, I would suggest clearly labeling the legend entries to make the data sources clear, e.g. instead of "Group 1", "Group 2", and "Group 3" you might have "Isaev (2003)", "Someone else (2012)", and something like "This work" (or "this study"). This way the fact that the data comes from different sources gets communicated clearly even if the figure is displayed without the caption, which might well happen in talks etc.
Often more informative labels are useful, e.g. if there are datasets collected using different techniques or multiple datasets from the same source. I would slightly caution against legend entries using numerical references such as "Isaev [3]", where [3] is some entry in the list of references. I've done so in the past, and unless you automatically generate the figure, the numbers can easily end up pointing to the wrong references as citations are changed in the main paper. It is better to include such direct citations in the caption, just as you suggest. However, there is little reason to put something like "[own dataset]" in the caption, especially if you have "This work" in the legend. You may well want to devote some space in the caption to contrasting your results with others', for which phrases such as "our results" may be useful, but generally what isn't explicitly referenced is assumed to be your original work. At least, that's the case if it is a reasonable assumption in light of the rest of the work. A minimal sketch of such a source-labelled legend is given below.
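This is only an illustration; the dataset names, values, and output file name are hypothetical placeholders, and matplotlib is merely one option:

```python
# Minimal sketch: label each legend entry by data source instead of a
# generic "Group N". All names and numbers below are made up.
import matplotlib.pyplot as plt

x_ext1, y_ext1 = [1, 2, 3], [2.1, 2.9, 4.2]   # data from Isaev (2003)
x_ext2, y_ext2 = [1, 2, 3], [1.8, 3.1, 3.9]   # data from Someone else (2012)
x_own,  y_own  = [1, 2, 3], [2.0, 3.0, 4.0]   # data collected in this study

plt.scatter(x_ext1, y_ext1, marker="o", label="Isaev (2003)")
plt.scatter(x_ext2, y_ext2, marker="s", label="Someone else (2012)")
plt.scatter(x_own,  y_own,  marker="^", label="This work")
plt.xlabel("X")
plt.ylabel("Y")
plt.legend()
plt.savefig("figure1.png", dpi=300)
```

This way the provenance of each point survives even when the figure is shown without its caption, e.g. in talks.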
Upvotes: 2 <issue_comment>username_3: Unless there is a strong reason for doing otherwise, I would suggest that you make your data available on a public (open) data repository such as [figshare](https://figshare.com/). The dataset can then be referenced with an appropriate URL or DOI, in addition to describing your own data exactly as the other answers suggest.
I mention figshare here only as an example. There are many other excellent open data repositories that you could consider.
Upvotes: 0
|
2022/12/29
| 922
| 4,103
|
<issue_start>username_0: If another PhD student created a dataset, can I use it for my own PhD thesis, with permission and attribution, and not collect any data myself? Or is it usually expected that a PhD student should collect at least some data?<issue_comment>username_1: Yes, in principle there is no issue with not generating your own data. The general requirement is that a PhD provides an advancement of knowledge in a particular field, not necessarily that it generates novel data.
However, this is likely to be subject-dependent. My PhD was based on developing a new statistical method to analyse data, and for that I applied it to data other people had collected, not data I had generated. However, if your field were experimental plant sciences, I imagine there may be an expectation (but not necessarily a requirement) to collect your own data.
Upvotes: 5 <issue_comment>username_2: There is no ethical problem with using data from others with permission and attribution. However, what is acceptable in a dissertation is up to your advisor and your institution.
Since one major purpose of a PhD is to give you training and experience in research, some will expect you to collect the data yourself. There are also problems that can arise in collection, which in some cases you avoid by using the data of others.
Work this out with your advisor if the question isn't purely theoretical. The research question itself might be important enough that the source of the data is less of an issue. There are lots of possibilities.
Upvotes: 4 <issue_comment>username_3: You should be fine not collecting the data and instead using widely used data if there is no regulation against it (and generally there is none). Moreover, using others' data has an upside that might increase the impact of your work.
First, people in the field will know right away what to expect. For example, in digit recognition work, using the MNIST data would be the no-brainer choice for the task, as people in CS know it by heart (a minimal loading sketch is given below). Or if you need an RNA sequence, you might not need to do the sequencing again if it is already on NCBI.
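The sketch assumes scikit-learn's OpenML loader, which is just one illustrative way to get the data:

```python
# Minimal sketch: reuse the well-known MNIST benchmark instead of
# collecting your own digit data. Loader choice is an assumption.
from sklearn.datasets import fetch_openml

# Downloads the 70,000-image MNIST dataset from OpenML on first use
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
print(X.shape, y.shape)  # (70000, 784) (70000,)
```

Anyone in the field can reproduce this line for line, which also feeds directly into the benchmarking point below.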
Second, using already existing data benefits you in benchmarking your method, both its performance and the correctness of your work. Now, suppose you are using another Ph.D. student's data and that it is not yet published, or is not a well-known dataset. If your results point in the same direction, that will tremendously increase the impact of both your work and the other Ph.D. student's work.
That said, you might need to look into the purpose of the data you are using and whether it will form the crucial part of your work. Nevertheless, please review this with your advisor and university to get a more decisive answer on whether the data inclusion is OK.
Upvotes: 4 [selected_answer]<issue_comment>username_4: Yes. Also: depends on your PhD.
Whole areas of physics rely on data retrieved by someone else (or layers of "someone elses"). Particle physics is one of them.
I used other people's experimental data because I am not interested in the experimental part, while they were interested in the theoretical/analysis one. We wrote a few papers together.
I wrote a paper after asking someone else for their data (they had published an article based on those data) because I wanted to analyze the data differently. That person did not want to be a co-author but was happy to share their data (and more).
I gave the data I received to another PhD student so that she would not lose time gathering more.
All of this, **of course**, assumes that the exchanges and relationships are clearly acknowledged (or co-authored).
And then comes the second sentence of my answer: your PhD may require you to actually retrieve your data. I do not think that this is a hard rule but the spectrum of research is so wide that everything is possible.
Upvotes: 0 <issue_comment>username_5: Ph.D. theses, like other publications, can use data other people collected. A meta-analysis or meta-study is a statistical analysis combining the results of pre-existing published scientific studies.
Upvotes: 0
|
2022/12/29
| 1,342
| 5,587
|
<issue_start>username_0: (It is a bit difficult to describe what I mean without sharing my definitions, but I am cautious about posting them since this concerns my thesis.) I had to provide a definition for a term I came up with. Since it is a complex matter, I provided one definition and then wrote "alternatively put:" and gave a second definition that describes it a bit differently and more elaborately. My advisor just put a question mark. I thought of merging both definitions together, but they are more or less different "views" of the term, so it didn't make sense. Is there an academically adequate way for me to keep both definitions of the same term?
If they merely overlap in meaning then there may be cases in which one applies and the other doesn't. In informal language that is probably fine, but in a dissertation that is unlikely to be the case. It is hard to "define" *goodness* for example.
Note that something like a metaphor or analogy isn't a definition. A definition needs to distinguish what is and what is not conformable.
I'll go along with your advisor and put a question mark.
You can choose one, perhaps, or unify them formally, but two is too many.
If the same term is used in your field in two distinct ways then it may be defined differently for different aspects. But in such a case you will need to be extremely clear when you use one definition or the other. What is it that distinguishes them?
Upvotes: 5 [selected_answer]<issue_comment>username_2: Echoing @username_1's good answer, and emphasizing a certain point: even if there is no truly "correct" order of logical development of the ideas (meaning what is a definition, versus consequence of definition), you probably need to *pick* *one*... Then perhaps *prove* that another "definition/characterization" is equivalent, or, possibly, is equivalent when both characterizations apply(?).
After proving such an equivalence, then we can forget the (possibly artificial) logical ordering that we used at the beginning (to avoid circular reasoning...)
Upvotes: 4 <issue_comment>username_3: I guess you can give multiple definitions, as long as you then explain why there are multiple definitions. For example, I can imagine if you are talking about "fatigue" you could say "for engineers, fatigue means... But when sports scientists use the term, they mean... The difference is interesting because...."
Basically, you don't want your readers to be puzzled by what's going on. If your supervisor (who is an expert) is puzzled, then everyone else will be too. Have a bash at rewording it, and run it past your supervisor again.
Upvotes: 2 <issue_comment>username_4: A definition must be comprehensive and unambiguous.
1. If you need a second definition, it can mean that your first "definition" is still ambiguous, that you still need to be more precise. In that case, it means that the first "definition" was actually not a definition. So it would not be okay to have two definitions. Work harder on the first definition.
2. Another case is if you have readers from very distinct fields who have distinct jargons which makes that some of them who aren’t familiar with the jargon of the first definition (assuming it is a true unambiguous definition) don’t really understand it. In that case, a second definition can be interesting as long as it’s just a sound paraphrase. But the best solution remains to find a definition whose words are understandable by everyone.
**TL;DR. It’s always a bad idea to have two definitions.** However, it can be interesting to present equivalences. That is the fact that one term in a specific field corresponds to another word in another field. This can allow to jump from one field to another field and connect knowledges together.
If you want to help people better understand what your object is by explaining things about it, by stating some consequences of its existence, or by telling its origin, etc. it doesn’t stand in the "definition" part but in the rest of your text parts (etymology, origin, consequences, corollaries, theorems, etc. as it depends on your working field).
Upvotes: -1 <issue_comment>username_5: Here is a familiar example of how to write *equivalent definitions*,
**Lemma 1.3.4:** For any set $A \subset X$ the following two conditions are equivalent:
(a) The complement $X-A$ is open in $X$.
(b) Any sequence $(x\_n)$ in $X$ with limit $x$ that has all $x\_i \in A$ also satisfies $x \in A$.
**Proof:** Yadda yadda yadda QED.
**Definition 1.3.5:** The set $A \subset X$ is said to be closed in $X$ to mean it satisfies one and therefore both of conditions (a) and (b) above.
Upvotes: 3 <issue_comment>username_6: What field is this? If you're doing qualitative work, it wouldn't be out of line to provide an elaborated definition and a more concise (TL;DR) version of the same concept. The shorter definition may be shorthand for the former. The shorter version could also be a more abstract concept that would meet with wide agreement (and be easy to grasp, maybe even by non-academics), while the longer version is a detailed description of exactly what does and doesn't fit into the concept. Indeed, when doing quantitative analysis in the social sciences, it's very common to give a conceptual definition, followed by a definition that operationalizes the concept for the analysis in question.
Upvotes: 2
|
2022/12/30
| 1,485
| 6,187
|
<issue_start>username_0: I finished my Bachelor's degree this year. I worked on some research with my advisor and after a few months, we decided that there's enough interesting research to be able to write a paper. This will be my first paper. He told me that I'll be the first author and asked me to start working on it. After the initial excitement wore off and I sat down to actually start writing the paper, I was, and continue to be, plagued with self-doubt and confusion. Following are a few aspects I find daunting about the process of paper writing:
* I am suddenly unsure if my results are even correct or sophisticated enough. I feel worried that I have either made a mistake somewhere or that my results are incomplete and there's more to be done.
* I am intimidated by the prospect of writing the introduction, which is usually a high-level literature overview. Although I have read multiple papers for my work, and based my project on a seminal few, I am worried I won't be able to describe the state-of-the-art in the field well enough or forget to cite an important paper that I don't know about.
* My writing's not academic enough.
I did tell my advisor about this, and he assured me that he understands and asked me to share the manuscript with him. But I cannot bring myself to do it because a) it's very incomplete and b) I feel worried that he'll go through it and be disappointed in me. I know it's probably irrational but I cannot help feeling this way.
These feelings have severely hindered the progress of my paper and research. Is it normal to feel this way and how do I tackle them and get on with writing my paper?<issue_comment>username_1: Do not worry!! This is 100% normal. Trust your supervisor - they are an expert and they think your work is good.
Don't worry about being sophisticated. No one really wants to read sophisticated stuff anyway - given a chance to read a simple, interesting paper and a complex "sophisticated" one I know which I would prefer.
And also, don't forget that whatever you write and submit to a journal will then be looked at by the editor and a couple of peer reviewers whose job it is to a) check it's good and b) suggest ways you can make it even better.
"Academic writing" is often dull, uninspired and unreadable. Just write as clearly as you can. Your readers will thank you for it. Don't try to be "academic" with it.
You got this - don't stress - trust the process!
Upvotes: 3 <issue_comment>username_2: Congrats for the opportunity of writing your first paper!
>
> I did tell my advisor about this, and he assured me that he understands and asked me to share the manuscript with him.
>
>
>
If your supervisor gives you this opportunity, and wants to sit down with you on the manuscript, definitely take advantage of it and do it. That's what supervisors are for - they serve as mentors. As long as you don't get any input from outside, it's hard to improve. If you take the opportunity to learn from them, you will automatically improve your academic writing style - which is difficult to get into in the first place.
The fear that the state of your manuscript will reflect badly on you is therefore, in my opinion, somewhat unfounded. If your supervisor trusts you enough to write a paper, they already have a good opinion of you and will certainly not lose faith in your abilities because of an incomplete paper.
Upvotes: 3 <issue_comment>username_3: Let me start by pointing out that the usual way to learn how to write an academic paper is to write a really bad draft, get it critiqued by a supervisor, rewrite it to make it slightly less bad, get it critiqued again (or even directly rewritten, if there is an imminent deadline), and then repeat the process a couple of times. You have done Step 1 ("Write a really bad draft"), and proceeding to Step 2 ("Give your supervisor the chance to tell you what is wrong with it") is the natural next step.
To your specific points:
1. Worry that the results are not sophisticated/interesting/impressive enough.
You aren't really qualified to judge this. It is common to feel that since you understand your own work, whereas what other people have done is harder to comprehend, your own work must be too simple. Ignore this feeling. If your supervisor thinks the results are enough to try and publish, wait and see what the referees say.
2. Worry that your results are wrong.
This one is a good worry, as long as it is productive. Double-check, triple-check. Include the steps you have taken to be absolutely sure in the draft.
3. Worry that your results are incomplete.
It is very rare to have the ultimate answer in any one paper. If you have concrete ideas, discuss with your supervisor whether it is worthwhile pursuing them before publication. If you just have the generic feeling that there is more to be done, ignore it. It's probably right, but it doesn't matter.
4. Writing the introduction
There are two ways to handle writing the introduction: A. Write a sloppy one at the start, then write the paper, then completely rewrite the introduction. B. Write the introduction last.
Definitely don't try to write a good introduction early on in the process; that just causes writer's block. Moreover, the introduction is where you put your stuff into the broader context. Naturally, the introduction will be the section where it makes the most sense to have your supervisor write it (if it is a co-authored paper). So don't worry about this, and don't let the remaining worries derail or delay writing the rest of the paper.
5. Writing style
The one experience that helped me develop my own writing style the most was writing a draft, and then reading the complete rewrite a coauthor of mine did of it. I could still see the thoughts I had brought to paper, but they were expressed much more clearly, much more elegantly. By studying the changes I learnt a lot. So, going back to what I wrote first: yes, your style may be bad. Your supervisor is going to expect a bad draft. You'll get better by getting feedback and learning from it, which means you have to let your supervisor read the bad stuff you have written.
Upvotes: 5 [selected_answer]
|
2022/12/30
| 1,801
| 7,492
|
<issue_start>username_0: *It is about my friend (an undergraduate student in Australia), but I used first person pronoun "I" for convenience. My friend needs advice, and I tried my best to describe the situation. The second-person pronoun "you" can be used in the answer.*
After I took an undergraduate course taught by professor A, he offered me an opportunity to do research with him. At that time, I wanted to pursue a Master by Research degree (MPhil) after graduation, so I accepted the offer, as having some research experience would be helpful for my graduate admission. However, it was an unpaid job. I was living in Melbourne, and the cost of living was really high here. I had to work part-time to cover my living costs. As a result, I did not spend enough time on research. I decided to quit.
Later, professor A managed to find a source of funding for the research project, and he offered me a new opportunity to work with him. He really wanted me to do an MPhil and PhD at the same university after graduation. It was a paid RA job. I decided to quit the part-time job and focus on doing research. We have not achieved any significant result since then.
Unfortunately, due to COVID-19, the university faced financial challenges. They decided to cut funding for research students. It has become extremely difficult for potential MPhil students to get the research scholarship (tuition fee waiver + stipend) unless students have a publication(s) prior to applying. His field of research is also not a priority of the university.
As a result, he has worked very hard to get me opportunities to participate in research projects from other research groups at the university. He always suggests new topics and upcoming conferences to me, telling me that I should try to publish something there. There was one occasion when he told me to submit a manuscript to a conference when there were literally only 3 weeks left before the submission deadline, and I had to start from zero because we had not done any research on that topic before. (2) Of course, I did not manage to do it and wasted my time.
I have completely lost the motivation to do research, and I no longer want to work in academia. I have decided to quit again. However, just a few days ago, he managed to get a new research opportunity for me. He told me that I would work with a research group in the United States, and they were going to submit a manuscript next year. Now, he sends me emails every day talking about that project, and I am so anxious every time I see new emails from him.
I want to refuse the offer, but I don't want to hurt his feelings. What should I do in this situation? How to tell him that I want to quit?
---
Additional details:
(1) I don't have any other job offer at the moment. I am thinking of enrolling in a master by coursework program after graduation instead.
(2) The stress affected my mental health. I thought that he had done so much for me that I should not disappoint him, but I also knew there was no way I could submit the manuscript in time (within 3 weeks) because the topic was totally unfamiliar to me.<issue_comment>username_1: (*one thing is missing in your question: do you have another opportunity, like a job offer?*)
There are two options for you:
1. Continue working with your professor with no motivation. You might regret this for your entire life but it might also work out.
2. You gather the courage to kindly tell your professor that you want to quit, stating your reasons. He might not like it, of course, because he needs to find someone else, but he would understand. And if you leave because you got a better opportunity, he would understand too, because in the end he wants you to proceed with your career. This is also a success for him.
If I were your professor, I would prefer to let you go and keep a good relationship with you rather than have you in my team without having the motivation to accomplish your tasks.
Now, if you take the second option, it is better not to do it via email but to schedule a meeting with him and inform him of your decision.
**EDIT after "Additional details"**
Now it becomes opinion-based, I think. If I were you, I would keep my position (and perform my tasks properly) until I found something else or got enrolled in a master's program. However, if the stress is unbearable, I would quit ASAP. Your health comes first.
Upvotes: 3 <issue_comment>username_2: Is the new research opportunity paid? What is causing the stress? Lack of funding?
This also comes down to whether you want to pursue a career in research. If not, then just let your supervisor know. The problem will only become more serious over time; i.e., a supervisor or student should rip off the 'band-aid' quickly.
If you want a career in research, then you should let your supervisor know to see whether he/she has a solution to your problems/worries. If funding is an issue, then your supervisor needs to fix this. If not, then you will need to go on a different path.
If there is funding, then you and the supervisor should decide on a work plan that is a win-win for both of you.
Upvotes: 2 <issue_comment>username_3: Only you can decide what to do with your life. But after reading your story, two pieces of advice:
(1) The question is a bit ill-defined when your other option is "I don't know." It's like asking "Should I stay on a sinking ship?": it's easy to say no, but if the alternative is jumping into shark-infested waters, then the answer is less clear. So, I recommend you do your homework, figure out what your plan would be if you quit your RA spot, and then make a decision between two concrete options.
(2) Your professor seems well-meaning but incompetent. Telling an undergraduate student to come up with an idea, research it, and publish it within 3 weeks does not show good judgment. Especially after months of no progress on a different problem. And that he is now sending "daily e-mails" about this new opportunity seems rather erratic. He is clearly well-intentioned, and perhaps also desperate to make progress for the sake of his own career. But at this point, I would have serious concerns about accepting him as an advisor. And of course, you should not make decisions based on pity or a sense of obligation; that will not end well.
Upvotes: 6 [selected_answer]<issue_comment>username_4: I had a similar situation after my master's degree, albeit less extreme; it still stressed me out quite a lot for a few months.
I knew deep down that trying to do research would be a miserable life: low pay, constant stress because I'm a huge procrastinator. Also, I didn't have the brilliance and social aptitude to be good at it.
But yeah, I got propositions from a few professors to start a PhD program. Lots of back and forth, them trying to convince me it would be a good choice, but ultimately I noped out of it while not even having another job in sight.
In a sense you are in a better place than me as it's clear you really don't want to pursue a career in academia. For me it was more like, yeah well it looks nice and interesting but I'm really not sure I will get out of it alive.
The few minutes it will take to write an email explaining your decision and send it will be painful, granted. But far less painful than becoming even more involved with some project you don't care about. Occasionally hurting people's feelings (assuming this professor has really put a lot of effort into this) is part of life.
Upvotes: 2
|
2022/12/30
| 3,298
| 12,336
|
<issue_start>username_0: In the question [Is it OK to use GPT-3+ to rewrite your own paper to have better English?](https://academia.stackexchange.com/questions/192013/is-it-ok-to-use-gpt-3-to-rewrite-your-own-paper-to-have-better-english/192076#192076) @gerrit mentions that:
>
> Check the rules of the journal. Chances are they allow automated spelling, grammar, and style checkers, but not text generators such as ChatGPT.
>
>
>
I've tried to Google for examples of journals with an explicit policy on the use of GPT-like language models but couldn't find any. Given that GPT-3 launched over 2 years ago, have any journals created a policy on the use of such models as an aid for writing papers? I'm not looking for an exhaustive list obviously, just one or two examples of such policies.
Please avoid linking to opinion pieces on what editors "should" do; I'm only interested in binding policies announced by specific journals.<issue_comment>username_1: Does this count?
>
> **Originality and plagiarism**: The authors should ensure that they have written entirely original works, and if the authors have used the work and/or words of others, that this has been appropriately cited or quoted.
>
>
>
From [Elsevier: Ethical publishing](https://www.elsevier.com/authors/policies-and-guidelines). It should cover ChatGPT: if an AI wrote it, it is not your work. I took Elsevier as an example, but basically all publishers have a similar provision.
Upvotes: 3 <issue_comment>username_2: Yes, the [International Conference on Machine Learning (ICML)](https://icml.cc/) recently added an explicit policy applicable to the [2023 edition](https://icml.cc/Conferences/2023) of the conference. While strictly speaking not a journal, I think it is still relevant to this question, since top-tier conferences in computer science, like ICML, are [comparable](https://academia.stackexchange.com/q/38086/68109) to (and often better than) journals in terms of quality, prestige, and selectivity.
The [policy](https://icml.cc/Conferences/2023/CallForPapers) is as follows (emphasis mine):
>
> ### Ethics:
>
>
> Authors and members of the program committee, including reviewers, are
> expected to follow standard ethical guidelines. Plagiarism in any form
> is strictly forbidden as is unethical use of privileged information by
> reviewers, ACs, and SACs, such as sharing this information or using it
> for any other purpose than the reviewing process. **Papers that include
> text generated from a large-scale language model (LLM) such as ChatGPT
> are prohibited unless the produced text is presented as a part of
> the paper’s experimental analysis.** All suspected unethical behaviors
> will be investigated by an ethics board and individuals found
> violating the rules may face sanctions. This year, we will collect
> names of individuals that have been found to have violated these
> standards; if individuals representing conferences, journals, or other
> organizations request this list for decision making purposes, we may
> make this information available to them. Details of this guideline
> will be published on the website.
>
>
>
Upvotes: 3 <issue_comment>username_3: >
> Are there examples of journals with an explicit policy on GPT-3 and equivalent language models?
>
>
>
Yes, e.g. ACL 2023: <https://2023.aclweb.org/blog/ACL-2023-policy/>. ACL also mentions their stance on code-generation programs.
Another example: CEUR-WS proceedings. <https://ceur-ws.org/ACADEMIC-ETHICS.html>:
>
> In the past few months, we have witnessed the emergence of novel large language models (LLM) reaching breakthrough performance on NLP tasks. These include ChatGPT and Galactica, which are AI assistants that can produce long and good quality content that can be seeded for authors’ work. Because of their recent emergence, the norms around the use of such technology is not fully established, yet. Hence, it is important to acknowledge its use and elaborate on how it has been employed.
>
>
> Specifically, we define three levels of AI assistance usage: *insignificant*, *low* and *substantial*. We will group the different use cases according to these three categories and we will define CEUR-WS stance.
>
>
> **Insignificant.** Activities like: i) paraphrasing and refining the manuscript content (using Grammarly or other spell checkers), and ii) smart composition (via predictive keyboards) are widely accepted and do not need any acknowledgement.
>
>
> **Low.** The use of AI tools for searching and generating literature review is acceptable upon authors’ checks. Authors must review the content and adjust/add references to line up with the narrative of their manuscript. In case of generating unoriginal content (i.e., definition, or description of well-known concepts) may be acceptable provided that the authors have checked it to be accurate and included proper references to the original content.
>
>
> **Substantial.** Using AI assistants for generating new ideas as well as new text is unacceptable. Most of the generated content may derive from existing work. Potential issues with such practice are related to originality, plagiarism, ownership, and authorship, whose consequences and impact are not yet clear.
>
>
> Regardless of the cases above, CEUR-WS publishes original work from named authors, and thus contributions from AI assistants can only be stated in the acknowledgements and/or by suitable references at the original research papers. We require that all authors and workshop editors adhere to these guidelines. Their violation will lead to the removal of the published paper or the whole volume, similar to our procedures dealing with plagiarism.
>
>
> As this technology is in current development, we plan to continuously review this policy in the upcoming months.
>
>
> This policy section is partly inspired by the “ACL 2023 Policy on AI Writing Assistance” available [here](https://2023.aclweb.org/blog/ACL-2023-policy/).
>
>
> Related documents:
>
>
> 1. [US Copyright Office's Guidance on AI-Generated Material](https://www.copyright.gov/ai/ai_policy_guidance.pdf) (2023-03-16)
>
>
>
Upvotes: 2 <issue_comment>username_4: By now, several major publishers have implemented policies on GPT-like models. Exactly what they cover varies, but there is at least a consensus that such models cannot be listed as [co-authors](https://www.nature.com/articles/d41586-023-00107-z). Policies allowing the use of such models do require the use to be properly disclosed and documented.
In a Jan 24, 2023 [editorial](https://www.nature.com/articles/d41586-023-00191-1) Springer Nature announced their [policy](https://www.nature.com/nature/for-authors/initial-submission):
>
> **Authors**. Corresponding author(s) should be identified with an asterisk. Large Language Models (LLMs), such as [ChatGPT](https://openai.com/blog/chatgpt/), do not currently satisfy our [authorship criteria](https://www.nature.com/nature-portfolio/editorial-policies/authorship). Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript.
>
>
>
In a Jan 26, 2023 [editorial](https://doi.org/10.1126/science.adg7879) the Science journals announced an update to their [policy](https://www.science.org/content/page/science-journals-editorial-policies#authorship):
>
> Artificial intelligence (AI) policy: Text generated from AI, machine learning, or similar algorithmic tools cannot be used in papers published in Science journals, nor can the accompanying figures, images, or graphics be the products of such tools, without explicit permission from the editors. In addition, an AI program cannot be an author of a Science journal paper. A violation of this policy constitutes scientific misconduct.
>
>
>
On or before [Jan 26, 2023](https://www.theguardian.com/science/2023/jan/26/science-journals-ban-listing-of-chatgpt-as-co-author-on-papers) Elsevier instituted a related policy. Their [publishing ethics page](https://www.elsevier.com/about/policies/publishing-ethics#) now contains:
>
> Where authors use AI and AI-assisted technologies in the writing process, these technologies should only be used to improve readability and language of the work and not to replace key researcher tasks such as producing scientific insights, analyzing and interpreting data or drawing scientific conclusions. Applying the technology should be done with human oversight and control and authors should carefully review and edit the result, because AI can generate authoritative-sounding output that can be incorrect, incomplete or biased. The authors are ultimately responsible and accountable for the contents of the work.
>
>
> Authors should disclose in their manuscript the use of AI and AI-assisted technologies and a statement will appear in the published work. Declaring the use of these technologies supports transparency and trust between authors, readers, reviewers, editors and contributors and facilitates compliance with the terms of use of the relevant tool or technology.
>
>
> Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author.
>
>
>
On Feb 17, 2023, Taylor & Francis [announced](https://newsroom.taylorandfrancisgroup.com/taylor-francis-clarifies-the-responsible-use-of-ai-tools-in-academic-content-creation/) an update to their policies stating that:
>
> AI tools **must not** be listed as an author. Authors must, however, acknowledge all sources and contributors included in their work. Where AI tools are used, such use **must** be acknowledged and documented appropriately.
>
>
>
On Feb 21, 2023 PNAS [announced](https://www.pnas.org/post/update/pnas-policy-for-chatgpt-generative-ai) an update to their [policy](https://www.pnas.org/author-center/editorial-and-journal-policies#authorship-and-contributions):
>
> **Authorship** must be limited to those who have contributed substantially to the work. The corresponding author must have obtained permission from all authors for the submission of each version of the paper and for any change in authorship. Use of artificial intelligence (AI) software, such as ChatGPT, must be noted in the Materials and Methods (or Acknowledgments, if no Materials and Methods section is available) section of the manuscript and may not be listed as an author.
>
>
>
Similarly, Oxford University Press has the [policy](https://academic.oup.com/pages/authoring/journals/preparing_your_manuscript/ethics#Authorship):
>
> **Note:** Neither symbolic figures such as Camille Noûs nor natural language processing tools driven by artificial intelligence (AI) such as ChatGPT qualify as authors, and OUP will screen for them in author lists. The use of AI (for example, to help generate content, write code, or analyze data) must be disclosed both in cover letters to editors and in the Methods or Acknowledgements section of manuscripts.
>
>
>
This isn't even limited to journal publishers. In a Jan 31, 2023 [blog post](https://blog.arxiv.org/2023/01/31/arxiv-announces-new-policy-on-chatgpt-and-similar-tools/), arXiv announced a [policy](https://info.arxiv.org/help/moderation/index.html#policy-for-authors-use-of-generative-ai-language-tools) stating that they
>
> 1. continue to require authors to report in their work any significant use of sophisticated tools, such as instruments and software; we now include in particular text-to-text generative AI among those that should be reported consistent with subject standards for methodology.
> 2. remind all colleagues that by signing their name as an author of a paper, they each individually take full responsibility for all its contents, irrespective of how the contents were generated. If generative AI language tools generate inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s).
> 3. generative AI language tools should not be listed as an author; instead authors should refer to (1).
>
>
>
Upvotes: 4 [selected_answer]
|
2022/12/30
| 495
| 1,771
|
<issue_start>username_0: I am not one of the primary authors of the article, and there are more than 15 authors for the paper I am referring to.
How should I include the paper in my CV? Should all authors be cited? How can I neatly highlight my contribution without having to list all the authors?
(I ask only so that I can be as economical as possible with the available space.)<issue_comment>username_1: If you have space constraints, one way I have seen this done is:
>
> First A., ... Plesca, A.-M., ... Senior A. (2022) "A title." *Journal*.
>
>
>
The first author lets someone find the full citation, and the senior author(s) contextualize whose lab(s) the work was performed in (if that's how your field uses last authors). If there are other important collaborators who don't fall in one of those roles, you can include them too.
If you are second or third (for example), you can list all authors until you get to yourself then insert the ellipsis.
Upvotes: 7 [selected_answer]<issue_comment>username_2: I tend to use the standard "Jones *et al.*, 2007" format, and then add a note to say it includes me. Something like this:
>
> Jones *et al.* (including username_2), 2007
>
>
>
Of course, even this isn't necessary. If you are listing publications in your own C.V., then you are claiming authorship by definition, so simply mentioning the Jones paper will be enough: you wouldn't be listing it if your name were not among the authors so you don't even really need to highlight it in any way.
Upvotes: 4 <issue_comment>username_3: I have also seen *i.a.* for Latin *inter alia*, meaning among others.
Something like:
>
> First Author, *i.a.* <NAME>. "A great Paper about something important", Science **3** (2022).
>
>
>
Upvotes: 3
|
2022/12/31
| 402
| 1,828
|
<issue_start>username_0: Several months ago I submitted an article to a journal with a good reputation. I then received two different reports. One asked for minor revisions. The second one, however, overlooked the content of the manuscript and confused it with several previously known results. The editor, seeing the latter report, had to reject it (I would have done the same in his position).
However, mistakes happen. The report of the second referee was completely wrong (from a mathematical point of view, hence nothing subjective). I wrote a long reply to the editor's decision letter explaining point by point why the second referee was incorrect. This happened 1.5 months ago; I had confirmation (from the editorial system) that the editor received the message, but I never received a reply.
Meanwhile, I have just submitted the paper to another journal. But what should I do? Just ignore everything and go ahead? I mean, this feels like a nonsensical rejection.<issue_comment>username_1: Generally, it's your responsibility to write a paper that your readers understand. If a reviewer gets it wrong, the editor doesn't necessarily need to feel that it's the reviewer's fault. Remember that *the editor chose them* as someone they felt had the expertise to review your paper - if they didn't get the point, then possibly the paper needs clarification.
Since you've submitted to another journal, I think you should write the previous editor a short message clarifying that you're submitting somewhere else, to avoid any accusations of dual submission, and then move on.
Upvotes: 4 [selected_answer]<issue_comment>username_2: I think what you experienced happens to all authors, sooner or later. Unfortunately, in my experience, an editor will ignore such emails unless you are known to the editor. Move on.
Upvotes: 2
|
2022/12/31
| 1,167
| 5,100
|
<issue_start>username_0: The company I work for is funding a project led by a professor at an engineering department in a US university. I was recently accepted by the university, and they offered me a departmental fellowship for my first semester, with subsequent semesters being funded by the advisor's grant money (which includes a stipend and full tuition). My employer also put me on a scholarship that covers my tuition and a very low stipend for the duration of the PhD, even though they are also paying the advisor grant money for the project. I am currently in my first year in the PhD program, working under the advisor on the project.
I realized that the stipend I receive from my employer's scholarship is barely enough to pay the rent, while all the other students working on the same project are being paid the standard departmental RA stipend, which is double my scholarship stipend. I brought this up indirectly with my advisor in the hopes that he would offer me an RA position that at least augments my scholarship stipend to a livable wage similar to my peers' (which is the department's policy). However, he gives me vague answers and then complains that my company still hasn't paid its grant installment for this year, or about how other students are having a hard time finding affordable housing, etc.
I just feel like I am being treated as a bargaining chip, even though I am a student bringing in outside money that pays my tuition in full plus half the standard departmental stipend. Should I be upfront with my advisor and ask/email him to augment my stipend and hire me as an RA, just like my peers who are working on the same project? What if he refuses? To whom should I talk? I don't want to take out loans to compensate for the low stipend, since I live in a high-cost-of-living area.
I also checked the grant contract written by my company; it mentions no restriction on my receiving a stipend. I don't want to bring this up to my company's management and make things sour between me and my advisor, since we get along well and his advising style matches all my preferences.
Edit: The admission letter that I signed and accepted states the following:
"in view of your superior academic achievement and credentials, the \*\*\*\* Engineering department is pleased to offer you financial support for your graduate studies, contingent upon continued satisfactory academic performance. Our standard support package includes a minimum annual stipend of $\*\*\* (double my current stipend) plus the payment of tuition and fees, including the graduate student health insurance plan."
Then the letter goes on to say that "the department supports students for the first semester, followed by support in the form of a Research Assistantship from the contracts/grants of the student’s advisor."<issue_comment>username_1: Whether other students are having a hard time finding affordable housing has nothing to do with your financial situation. You need to make it clear that this is about your ability to sustain yourself in the program.
Possibly, there are other means to finance your PhD. Can you do some teaching and get extra funding from the department?
Also, why hasn't the company paid the instalment? Is your professor not asking them to pay?
Upvotes: 1 <issue_comment>username_2: I don't see any right to be paid similarly to peers who are financed by different programs.
I did my PhD on a scholarship, and my colleagues were paid much more because they had applied for more competitive positions and had to work hard on the other tasks assigned to those positions. I did not apply for those positions because I preferred to focus on my PhD. That is only fair!
If you want the same wage, simply apply for one of those positions (what you called the standard departmental RA stipend).
Of course, it is completely legitimate to ask your company/professor to raise the stipend, but without claiming it as a right based on an unfair comparison.
Upvotes: 2 <issue_comment>username_3: I would approach the department financial admin or the director of graduate studies for clarification on your situation. It seems there has been some oversight and you are not being paid the correct stipend.
Your advisor may be vague because he doesn't have a good grasp of the situation. Many advisors have only a vague understanding of the policies for students with multiple streams of money (here, it seems to be a company stipend plus department tuition remission plus something from advisor funds, but that remains unclear). You need to speak to others who may have a clearer idea. To be politic, I would probably mention to the advisor that there seems to be an issue with your stipend not meeting the department minimum, and that you are going to investigate where the confusion lies.
One hypothesis might be that you are actually receiving the correct amount from all university sources including your advisor, but since your company has not paid their installment you are not receiving the company scholarship and this may need to be paid to you in a lump sum once the funds come through.
Upvotes: 0
|
2022/12/31
| 1,350
| 5,910
|
<issue_start>username_0: Of course, they may be looking up something related to the lecture content, but I can usually spot that dead-eye TikTok coma a mile off. I get distracted because I am genuinely upset that they’re wasting time, money and opportunity.
I initially thought I was just taking it too personally and that my ego was just taking a hammering, but it is much more about feeling sorry for them and their struggle with the addiction. Either way, it stops my flow as I find myself having to switch mindsets to tackle the situation, which then becomes a total distraction for the rest of the class. I do remind students at the beginning of each lecture that social media is a distraction, and encourage all to give themselves a break from it during their lecture time, but some (especially if they have come late to the lecture!) are just too locked in the tractor-beam.
I consider my lectures interesting, exciting, full of humour and carefully curated AV, but, with some students, I feel I'm losing the battle for their attention. I'm keen to learn of strategies for coping with the aforementioned mindset switching, so any suggestions are welcome.
By the way, I can’t put a ban on phone use, as the Uni has just issued an edict saying it’s ok for students to take notes on them.<issue_comment>username_1: At university level we are teaching adults (in many legal jurisdictions, anyone over 18 years old is considered fully adult), so when teaching we are in less of a position to attempt to correct a learner's behaviour than we might be in, say, elementary school.
The students are thus responsible for their own behaviour. How a student's actions affect us as individuals is something we should also have some degree of control over. If something a student does impacts only us, then we need to consider what we can do to limit that impact. However, if something a student does affects several other learners in the class in a way that impairs their learning, then the teacher will need to find a way to reduce that impact on the class's learning.
Reducing the impact on the class's learning can be achieved in several ways, depending on the nature of the action; in other words, whether it is chronic and ongoing or acute and immediate. For something that is chronic and ongoing, speaking to the individual student outside of class can often be an appropriate action, as it does not disrupt the class more than the irritating behaviour does. When something is more acute and impacts several students, then halting the flow of teaching and commenting may be better.
Personally, I often find class peer "pressure" the most effective in many cases. I just stop speaking and stare at the offender (I "freeze"). Several students then "nudge" the offender, who comes to realise that others in the class had a negative reaction to what they were doing, and both stops and does not repeat the behaviour.
I also find that directly giving students a list of things to do and not to do is counterproductive at an adult level, as it implies you view them as perhaps not capable of adult behaviour. Mutual respect works much better at encouraging learning in a university context. They need to know that I like teaching them this topic, and I need to get similar vibes back from them in some ephemeral way.
Upvotes: 3 <issue_comment>username_2: When I have guest lecturers on my course, I often sit at the back. It's a tiered lecture theatre and you can see what's on the students' laptop screens. For a minority, there is usually some social media going on. Even when the lecturer is brilliant.
I have not done this, but I like the idea of applying social pressure. Do a poll in your next class. Say you have noticed this happening and want their views: is it distracting to them, and should you have a rule banning it? If they vote yes, then that in itself should reduce it, particularly if you point out that the student sitting behind them will usually know when they are being rude and breaking the new rule. And if they vote no, then that hopefully gives you some reassurance that they are not as bothered by it as you might think, allowing you to focus on the lecture without being distracted by the impact social media is having.
Upvotes: 0 <issue_comment>username_3: It is unreasonable to expect that everyone in your audience will pay complete attention to you all the time. Have a look around at the audience at a conference, and you will see that even(?) professors will not always pay attention, despite having paid to be at the conference, considerations of professional courtesy, and an interest in actually learning something. You don't have nearly enough information to figure out what is going on with a student during your lecture, and that is not where your focus should be anyway.
While giving a lecture, focus on the students who are paying attention. Anything else will make you feel bad (as you have noticed) and will probably be detrimental to your teaching and to the experience of the students who do pay attention, without actually accomplishing anything.
You are already working hard to make it easy to pay attention. Keep that up. If you notice excessive amounts of non-attention, try to figure out what is wrong - but a student or two looking at their smartphone is normal.
There might be students who genuinely need some help in paying attention to their lectures rather than to TikTok. But that support is probably better handled in some kind of mentoring meeting than through public shaming or other ad-hoc interventions during a lecture. In particular, you will want to make sure you actually know what is going on with a student before jumping to conclusions. (Just to offer a scenario: a student is obsessively checking their messages during a lecture because a loved one is in hospital in critical condition. You call them out for browsing TikTok or whatever. Horrible experience for everyone.)
Upvotes: 2
|
2022/12/31
| 1,028
| 4,695
|
<issue_start>username_0: What does it mean when the posting says they are looking for an associate or full professor? Does this mean that only people that are currently in tenure-track positions will be considered? Would the search committee consider applications from non tenure-track applicants?
Any advice would be much appreciated!<issue_comment>username_1: It could mean a person currently on the tenure track, but it might mean, and is more likely to mean, someone who is already an associate or full professor elsewhere: someone eligible for a mid-level or senior position.
If you aren't at least an assistant professor, your resumé would need to be pretty extraordinary for you to be considered. Someone widely known and respected in your field might be considered, but the bar would be pretty high. After all, the job is about research and teaching, which are, perhaps, more likely to be skills of those who are already academics.
In particular, a recent graduate with a new doctorate would have almost no chance of being considered unless the dissertation were a solution to a long-standing and very important question. It can happen, but it is vanishingly rare.
Upvotes: 3 <issue_comment>username_2: Sometimes people who are employed outside of academia, say in senior industry positions or at research labs, may be hired as associate professors or even full professors if they have impressive enough publication records. There is no requirement to already be an associate or full professor, or to already have an academic job of any sort.
It’s also not uncommon for people who are assistant professors and are just about to be considered for promotion to associate professor at their own institution to also apply for associate professor jobs elsewhere. If they are at a career stage where their own institution would consider them for promotion, other universities would likely be willing to consider them for such a position as well.
In rare cases, someone may succeed in being hired for a tenured position as an associate professor directly from a postdoc position, skipping the tenure track phase. Needless to say, you’d have to have some pretty stellar achievements under your belt for this to be a realistic possibility.
The only hard criterion for being hired as an associate professor is that you must be impressive enough that people will want to hire you as an associate professor.
Upvotes: 5 <issue_comment>username_3: Have a look at the corresponding selection criteria. They will spell out who they want. For example, I have seen full professor positions going to people with excellent industry connections with very few publications. In another example, I have seen universities set a minimum requirement in terms of citations before one can apply. It all depends on the strategic plan of a university.
Upvotes: 2 <issue_comment>username_4: The locale matters a lot here as well. Germany, until rather recently, basically did not even *have* tenure tracks or assistant professors. Instead, people worked as postdocs or in other academic or industrial positions until they were sufficiently experienced to apply as professor. With the introduction of "Juniorprofessor" positions this is slowly changing, but I would imagine that even today most people hired as professor have never been a Juniorprofessor in their life.
In other countries, where the assistant-associate-full professor chain is better established, it may be less common to "jump ranks", but I doubt it's strictly *impossible* anywhere.
Upvotes: 2 <issue_comment>username_5: In academic hiring in the US there are several different dimensions that matter, including tenure status, salary, and rank. There are also many different kinds of institutions, so what I am going to say won't apply everywhere.
Normally, where I teach, a position that is advertised *specifically* for an Associate or Full Professor would be for someone who would be expected to take a leadership role in the department, either as chair, heading a center or institute, or creating a new program. Most of the time such a person is already at that rank and they would already have tenure and that tenure would transfer (there are different mechanisms for this).
As other people have mentioned, in some departments, such as computer science or business, someone with a lot of industry experience (usually also including a publication record) will sometimes be hired at those ranks, because the assistant professor salaries are not competitive and don't reflect their seniority. However, those people would be hired without an award of tenure.
Overall, the advice to read the advertisement carefully is really good.
Upvotes: 1
|
2022/12/31
| 667
| 2,885
|
<issue_start>username_0: I have code I wrote to replicate a simulation study by another author. Is it okay to upload this publicly to my code repository?
This will include:
* code (my work, based on the author’s description in the study)
* replicated table (values are similar to original)
* original table (for comparison purposes, with citation)<issue_comment>username_1: Yes, it is fine to do that as long as you acknowledge the original paper.
Upvotes: 2 <issue_comment>username_2: Basic copyright rule applies. If you wrote the code yourself, then no problem.
Upvotes: 2 <issue_comment>username_3: You ask about uploading three different things, so it's worth answering with regard to the three parts.
1. **Your own code**: Definitely acceptable to upload it. You wrote the code, so the code is your original work and you would own the copyright in it. The reasons for citing the original work in the context of your own code are two-fold: (a) to put your own code in context, and (b) to acknowledge an academic debt to the original author. In terms of copyright, however, I see no *legal requirement* to cite the original paper.
2. **Your table of results**: Definitely acceptable to upload it. This is your original work, however closely your results approximate the original. The reasons for citing the original work, in the context of your table of results, are exactly as in point 1.
3. **Original table, for comparison purposes, with citation**: From the point of view of academic ethics, this would look quite acceptable. However, the copyright issue is less clear and will depend upon the jurisdiction of the place where you are publishing, and *possibly also* on the place where the original work was published. The notion of "fair use" in copyright works varies very substantially from jurisdiction to jurisdiction. If the original table (i.e., the other author's original work) is more than a few lines long, I *strongly recommend* that you seek permission to publish it. Just have a look at "permission requests" on the website of the original journal.
Upvotes: 6 [selected_answer]<issue_comment>username_4: Other answers here seem to focus on the issue of copyright.
I will add that it is also a good idea to publish your code and results in the interest of documenting replication of the original study. This is an important aspect of science: being able to replicate/reproduce the results of others helps validate scientific findings.
In this context, I think it is also important to cite the original work to help document your replication of it. Actually, you could even consider publishing your replication in a journal that focuses on this important aspect, which is usually ignored by traditional journals in favour of new results. One such journal is [ReScience C](https://en.wikipedia.org/wiki/ReScience_C).
Upvotes: 4
|
2023/01/01
| 1,917
| 8,353
|
<issue_start>username_0: I would like some summary advice for the specific scenario of looking for a new postdoc position while trying to avoid ending up with a terrible supervisor. Let me narrow this question down by saying that the most important thing to avoid would be an unsupportive/pressurizing/controlling supervisor, while competence is much less important. As long as the supervisor is really nice and lenient ("laissez-faire"), the rest does not matter. Then again, the real nightmare is of course when the supervisor is both extremely controlling and incompetent. (A controlling supervisor is not that bad if at least they do not pressure employees into crazy and pointless tasks/projects.) The field is experimental psychology (in case that matters).
So far I have just two ideas, but both are far from perfect.
(A) Check how they performed during their first 3-5 years in academia (i.e., from the start of their PhD), since high output (especially high-quality first-authored articles) during that time is, I think, a strong indication of competence. (Publications/achievements afterwards are always questionable, as people rising in the ranks, for whatever reason, gain increasingly more power and opportunities for potential swindling and extortion.) However, the problem with this is that it is only useful in the case of high output, since low output does not necessarily mean incompetence (e.g., they may have had a terrible supervisor themselves), and it has in any case very little to do with being a nice and decent person.
(B) Ask current and former employees. However, the problem with this is that these people, especially in the case of a really terrible supervisor, may be extremely intimidated and/or even brainwashed. This is especially true for current employees; as for former employees, they might simply be happy to be done with the place and might not want to get involved. Also, some may have become accomplices. In general, no one has a strong incentive to tell the truth (to a helpless stranger).<issue_comment>username_1: You have a difficult problem, for many reasons. One is that you probably don't have dozens of opportunities to choose from, so you may need to take a less-than-perfect position. Another is that it is difficult to judge from afar, even with feedback from "former employees", whose needs may differ from yours, quite apart from the honesty issue.
It isn't easy to manage, but if you can make face-to-face contact with some of the professor's current students and postdocs, you can get a sense. "How is X to work with?" is a useful question, but only if you can see the immediate reaction, including facial expressions. That would give you a hint about difficult-to-work-with supervisors, and the actual answers, if informal and off-the-cuff, might give some indication of how helpful or standoffish they tend to be.
You need to judge for yourself what level of interaction you want in a supervisor and the experiences of others can give you some hint about that, though imperfect. But email exchanges or phone calls won't help much and might give misinformation.
You could, perhaps, sit in on a seminar that the professor leads with current doctoral students and/or postdocs. Just watch for interactions and reactions from everyone.
I think your "A" option is probably not going to tell you much. How they did and how they interact with people might not be highly correlated. There are also reasons for low productivity initially in some (many?) fields, such as getting a new research program started. Too much noise in the data, I'd guess.
But I don't think there is any real substitute for on-scene observation over a couple of days. Hard to manage, I realize.
Upvotes: 3 <issue_comment>username_2: After you had your in-person interview with the potential supervisor ask if you could **talk to some people in the lab**. Maybe go for a coffee with someone and just ask them what they think. Of course they will not tell you that this supervisor is terrible but from a lack of excitement and/or sheepish smile you should be able to tell (you are working in psychology after all!).
Upvotes: 5 <issue_comment>username_3: I have always believed that, for larger, established labs, by far the best metric for overall supervision and work-environment quality is the success (or lack thereof) of previous holders of similar positions. Have previous postdocs gone on to good careers (in academia, if that's what you are looking for)? Has the publication track record of previous postdocs improved after taking the position, relative to their PhD work?
Some particular red flags I would watch out for:
* Multiple postdocs leaving for industry in less time than the typical postdoc duration in your area (people leaving for faculty positions is no red flag at all, but people quitting their postdoc to go to industry may indicate that they lost interest in academia while working with that advisor).
* Promising candidates not publishing after joining this lab.
* Promising candidates publishing much shoddier work than during their PhD.
* Candidates moving internally to work with other faculty in the same university (this is possible in some universities, but often indicative of some sort of falling-out).
All of these things are visible, or can at least be guessed, from the outside with some detective work. Sure, you don't really know what's going on specifically, but do you really need to know whether an advisor is incompetent or a micromanager, has poor leadership skills, or whether postdocs are burned out on grunt work?
Upvotes: 4 <issue_comment>username_4: Here are some questions you can ask current/former employees.
About the supervisor:
* Are they more hands-on or more "laissez-faire"?
* How often did you meet with them/how often did you need to report to them?
* Would you collaborate with them again?
* Can you tell me about the process for your last paper? (Notably how/when the PI was involved)
About the lab:
* What is the profile best suited to succeed in the lab? *(Anything along the lines of "must be able to bear pressure" is a clear red flag for your question)*
* How often did you collaborate with other students/labs? *(It will give an idea of the degree of autonomy)*
* What were the authorship rules in the lab? *(Sometimes, PIs explicitly define that their name must be added to absolutely all papers, even those where they have no expertise, and that is imo a red flag)*
Another indicator can be the diversity of the lab: are there many women? People from other countries? People with young children? Are they publishing/supported as much as other people at similar career stages?
Upvotes: 3 <issue_comment>username_5: If you have a specific supervisor in mind, you could try asking some of that supervisor's current students what working with them is like. If they give mostly negative accounts, then it's likely that the supervisor would not be a good fit. However, I understand that this is especially difficult if you're not already around people from that lab before you move to the institution.
It could also be a good idea to find out where that supervisor's students ended up going after their PhD finished. If most of them decide to quit academia altogether after their PhD, that could be a red flag in itself - a PhD is a challenging endeavour even when the supervisor is good, but if people are being supervised by a particular member of faculty and then thinking "screw this" when they reach the end, then it could be a sign that that particular supervisor led them to the conclusion that academia wasn't for them. A PhD should be challenging because of the nature of a PhD, and the experience shouldn't be unfairly spoiled by the person supervising you.
A valuable piece of advice I was given when choosing PhD programmes is that the relationship you have with your PhD supervisor is arguably the most important factor in your success, perhaps more so than how well your interests align. Obviously there's no point choosing a supervisor who you get on really well with if your interests don't line up, but just remember that people are multi-faceted and that just because your research interests might line up perfectly with a certain supervisor, it doesn't mean you'll get on well in all aspects.
Upvotes: 2
|