Q: .Net Multithreading: SQL ConnectionPool In a VB.Net Windows Service I'm currently pooling units of work with:
ThreadPool.QueueUserWorkItem(operation, nextQueueID)
In each unit of work (or thread I'll use for ease of understanding), it will make a couple MSSQL operations like so:
Using sqlcmd As New SqlCommand("", New SqlConnection(ConnString))
With sqlcmd
.CommandType = CommandType.Text
.CommandText = "UPDATE [some table]"
.Parameters.Add("@ID", SqlDbType.Int).Value = msgID
.Connection.Open()
.ExecuteNonQuery()
.Connection.Close() ' Found connections were not closed quickly enough
End With
End Using
When running a netstat -a -o on the server I'm seeing about 50 connections to SQL server sitting on IDLE or ESTABLISHED, this seems excessive to me especially since we have much larger Web Applications that get by with 5-10 connections.
The connection string is global to the application (doesn't change), and has Pooling=true defined as well.
Now will each of these threads have their own ConnectionPool, or is there one ConnectionPool for the entire .EXE process?
A: From the MS Docs -
"Connections are pooled per process, per application domain, per connection string and when integrated security is used, per Windows identity"
http://msdn.microsoft.com/en-us/library/8xx3tyca.aspx
Are you experiencing errors such as -
Exception Details: System.InvalidOperationException: Timeout expired. The timeout
period elapsed prior to obtaining a connection from the pool. This may have occurred
because all pooled connections were in use and max pool size was reached.
Also how many work items are being queued in the service?
A: One big problem with your code is that you aren't closing your connection if ExecuteNonQuery throws an exception. Disposing the SqlCommand is not enough, you need to also dispose the SqlConnection when an exception is thrown, something like:
Using connection As New SqlConnection(ConnString)
Using sqlcmd As New SqlCommand("", connection)
With sqlcmd
... etc
End With
End Using
End Using
A: Although I generally like the using statement, I find that sometimes in the .NET libraries the actual CLOSE of a handle isn't done until garbage collection. So being old school and religious about such things, I add an explicit close at the end of my using statement. This is pseudo code since I generally code in C# rather than VB.NET, but it should give you the idea.
Using connection As New SqlConnection(ConnString)
    Try
        Using sqlcmd As New SqlCommand("", connection)
            With sqlcmd
                ... etc
            End With
        End Using
    Finally
        connection.Close()
    End Try
End Using
A: If the number of open connections offends you, take control in the connection string
Notice: MinPoolSize and MaxPoolSize.
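For example, a sketch (server, database, and pool sizes are placeholders):
Data Source=myServer;Initial Catalog=myDatabase;Integrated Security=SSPI;Pooling=true;Min Pool Size=5;Max Pool Size=20;
The pooler will then keep at least Min Pool Size connections open and refuse to grow past Max Pool Size.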
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/130941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Read binary file as string in Ruby I need an easy way to take a tar file and convert it into a string (and vice versa). Is there a way to do this in Ruby? My best attempt was this:
file = File.open("path-to-file.tar.gz")
contents = ""
file.each {|line|
contents << line
}
I thought that would be enough to convert it to a string, but then when I try to write it back out like this...
newFile = File.open("test.tar.gz", "w")
newFile.write(contents)
It isn't the same file. Doing ls -l shows the files are of different sizes, although they are pretty close (and opening the file reveals most of the contents intact). Is there a small mistake I'm making or an entirely different (but workable) way to accomplish this?
A: You can probably encode the tar file in Base64. Base64 will give you a pure ASCII representation of the file that you can store in a plain text file. Then you can retrieve the tar file by decoding the text back.
You do something like:
require 'base64'
file_contents = Base64.encode64(tar_file_data)
Have a look at the Base64 Rubydocs to get a better idea.
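Decoding it back is just the reverse:
require 'base64'
tar_file_data = Base64.decode64(file_contents)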
A: First, you should open the file as a binary file. Then you can read the entire file in, in one command.
file = File.open("path-to-file.tar.gz", "rb")
contents = file.read
That will get you the entire file in a string.
After that, you probably want to call file.close. If you don't do that, file won't be closed until it is garbage-collected, so it would be a slight waste of system resources while it is open.
A: Ruby 1.9+ has IO.binread (see @bardzo's answer) and also supports passing the encoding as an option to IO.read:
*
*Ruby 1.9
data = File.read(name, {:encoding => 'BINARY'})
*Ruby 2+
data = File.read(name, encoding: 'BINARY')
(Note in both cases that 'BINARY' is an alias for 'ASCII-8BIT'.)
A: If you need binary mode, you'll need to do it the hard way:
s = File.open(filename, 'rb') { |f| f.read }
If not, shorter and sweeter is:
s = IO.read(filename)
A: How about some open/close safety?
string = File.open('file.txt', 'rb') { |file| file.read }
A: Ruby has binary reading:
data = IO.binread(filename)
or, if you're on a Ruby older than 1.9.2:
data = IO.read(filename)
A: on os x these are the same for me... could this maybe be extra "\r" in windows?
in any case you may be better of with:
contents = File.read("e.tgz")
newFile = File.open("ee.tgz", "w")
newFile.write(contents)
A: To avoid leaving the file open, it is best to pass a block to File.open. This way, the file will be closed after the block executes.
contents = File.open('path-to-file.tar.gz', 'rb') { |f| f.read }
A: If you encode the tar file in Base64 (storing it in a plain text file), you can use
File.open("my_tar.txt").each {|line| puts line}
or
File.new("name_file.txt", "r").each {|line| puts line}
to print each (text) line in the cmd.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/130948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "268"
}
|
Q: How can I provide custom error messages using JAXP DocumentBuilder? I want to provide my own message from the validation done in DocumentBuilder, rather than the one from XMLMessages.properties.
Now I see that a property error-reporter needs to be set to a class which extends XMLErrorReporter.
However, I've not been able to get ComponentManager from Document/Builder/Factory.
Doing parsing of string in SAXParseException is the last option, but I'm just thinking there may be a 'best practice' way of doing it.
A: Have you already looked at DocumentBuilder#setErrorHandler?
If yes, could you explain why this approach doesn't work for you?
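For reference, here is a minimal sketch of that approach (the message text is illustrative):
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.ErrorHandler;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setValidating(true);
DocumentBuilder builder = factory.newDocumentBuilder();
builder.setErrorHandler(new ErrorHandler() {
    public void warning(SAXParseException e) {
        System.err.println("warning: " + e.getMessage());
    }
    public void error(SAXParseException e) throws SAXException {
        // substitute your own text for the default message
        throw new SAXException("Validation failed at line " + e.getLineNumber(), e);
    }
    public void fatalError(SAXParseException e) throws SAXException {
        throw e;
    }
});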
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/130960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Planning to use PostgreSQL with ASP.NET: bad idea? I'm currently planning the infrastructure for my future web project. I want to go the way Joel went with having one DB per client and now thinking which DB engine will be good for me. The best would be of course SQL Server, but I can't afford a full-blown version at this moment and I don't think SQL Server Express will be a good choice for the loaded service. Now I'm thinking of using PostgreSQL instead. Given that my development environment will be ASP.NET 3.5 with say NHibernate or LINQ to SQL, how much trouble will I have if I use PostgreSQL instead of SQL Server?
Thanks!
A: I don't think it is a bad idea; on the contrary, it's a great experience.
By the way, NHibernate is the way to go. LINQ to NHibernate is under heavy development and available in the trunk, so if you do care about LINQ (which I don't), don't be scared to use it.
A: Why not start with SQL Server Express and migrate when you have the money? That way you can move toward what you consider ideal and reduce conversion costs.
A: NHibernate works OK with PostgreSQL (whether the db is on Windows or UNIX-like OSes) and .NET works well with it using the Npgsql db provider.
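A minimal ADO.NET sketch with Npgsql (connection details are placeholders):
using Npgsql;

using (NpgsqlConnection conn = new NpgsqlConnection(
        "Server=127.0.0.1;Port=5432;Database=mydb;User Id=me;Password=secret"))
{
    conn.Open();
    // standard ADO.NET from here: NpgsqlCommand, NpgsqlDataReader, etc.
}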
The only "trouble" you'll get is of course PostgreSQL doesn't do T-SQL. In fact its PL/pgSQL stored proc language is closer to Oracle's PL/SQL than it is to MS SQL Server's T-SQL. So you'll have to recode your stored procs, and there will be some gotchas to watch out for if you do ADO.NET. If you use NHibernate, you probably won't have to worry much about that. No LINQ to SQL though, so tough luck for you.
PostgreSQL is scalable and works OK now with Windows (earlier versions didn't support Windows formally), and pgAdmin is a good management tool for it, you'll be able to do most of the stuff you can do with SQL Server's GUI tools with it in a short time.
A: If you go with PostgreSQL you won't be able to use LINQ to SQL. Currently LINQ only works with SQL Server (possibly Oracle). I'm not sure about NHibernate. Also, if you use PostgreSQL, last time I checked, they had dropped Windows support. So you'll be looking into having a second box running Linux for the DB.
[EDIT]
It turns out PostgreSQL is supported on windows. I can't recall where I saw support being cancelled. Anyway, I've heard it runs better on Linux anyway, so you might want to look into doing that regardless.
A: These days, Postgres works really fast with .NET, and it is as good as or even better than the proprietary MS SQL Server.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/130968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: An affordable tool for DB modeling Could you guys recommend some affordable SQL modeling tool which supports SQL Server, PostgreSQL and MySQL? I'm looking into up to $300 per license range. One tool per answer, please!
Thanks!
A: In addition to Microsoft Visio and SQL Server Database Diagrams,
one tool not yet mentioned is EA Architect which can be purchased for under US$200.00
A: Keep an eye on SchemaBank; they are now at beta quality: visual drawing of entity relationships, forward-engineering into SQL statements, versioning, and simple sharing of your schema with others. And they seem to offer free accounts. For MySQL and PG only, though.
Guess web-based database modeling is about to take off soon? They are a SaaS vendor.
A: In fact, the most comprehensive list of database modeling / design tools would be the one on Wikipedia.
A: ERwin has always been my favorite data modeling tool.
A: Some people I've worked with have had good things to say about TOAD - it works with Mysql, though not PostgreSQL. It is a free download and includes graphical modeling support.
A: This site has a run down of some of the tools available: http://www.databaseanswers.com/modelling_tools.htm.
I tried Dezign by Datanamic, which the site recommended for the individual user. It was pretty easy to use and has lots of features. It supports about 20 different databases to various degrees.
It has a free trial and the license is $245 for the Standard edition (not $139 as listed on the site). I ended up not buying it because I don't have any budget for tools but I would have if I did. If you need multiple licenses, the 5 and 10 packs offer a significant discount over the single-license fee.
A: Sybase PowerDesigner is the best and most universal database tool I ever saw, but I can't get information about pricing, because "Your country is not currently supported for purchasing online.".
It supports more than 50 databases, including MySQL, Oracle, MSSQL and a lot more. I'm not sure about PostgreSQL, but it should be supported.
More information here: http://www.sybase.com/products/modelingdevelopment/powerdesigner
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/130986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: ASP.net MVC HomeController URL By default the MVC Preview 5 web project comes with a HomeController.vb class. This sets the web URL to http://www.mywebsite.com/home/ by default. If I just wanted this to be http://www.mywebsite.com/ by default, how would I accomplish that?
A: Answered already so I'm just going to direct you to How do I get rid of Home in ASP.Net MVC?.
Users with 10k+ rep can also refer to https://stackoverflow.com/questions/33861/aspnet-mvc-routing-basics-root-route (deleted)
A: I'm not sure I understand your question, if what you want is to go to http://www.mywebsite.com/ and not have it be trailed by /home, that is the behavior you will get.
Is there something else you were looking for?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/130997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What's the SQL query to list all rows that have 2 column sub-rows as duplicates? I have a table that has redundant data and I'm trying to identify all rows that have duplicate sub-rows (for lack of a better word). By sub-rows I mean considering COL1 and COL2 only.
So let's say I have something like this:
COL1 COL2 COL3
---------------------
aa 111 blah_x
aa 111 blah_j
aa 112 blah_m
ab 111 blah_s
bb 112 blah_d
bb 112 blah_d
cc 112 blah_w
cc 113 blah_p
I need a SQL query that returns this:
COL1 COL2 COL3
---------------------
aa 111 blah_x
aa 111 blah_j
bb 112 blah_d
bb 112 blah_d
A: Does this work for you?
select t.*
from table t
left join ( select col1, col2, count(*) as count
            from table
            group by col1, col2 ) c
  on t.col1 = c.col1 and t.col2 = c.col2
where c.count > 1
A: With the data you have listed, your query is not possible. The data on rows 5 & 6 is not distinct within itself.
Assuming that your table is named 'quux', if you start with something like this:
SELECT a.COL1, a.COL2, a.COL3
FROM quux a, quux b
WHERE a.COL1 = b.COL1 AND a.COL2 = b.COL2 AND a.COL3 <> b.COL3
ORDER BY a.COL1, a.COL2
You'll end up with this answer:
COL1 COL2 COL3
---------------------
aa 111 blah_x
aa 111 blah_j
That's because rows 5 & 6 have the same values for COL3. Any query that returns both rows 5 & 6 will also return duplicates of ALL of the rows in this dataset.
On the other hand, if you have a primary key (ID), then you can use this query instead:
SELECT a.COL1, a.COL2, a.COL3
FROM quux a, quux b
WHERE a.COL1 = b.COL1 AND a.COL2 = b.COL2 AND a.ID <> b.ID
ORDER BY a.COL1, a.COL2
[Edited to simplify the WHERE clause]
And you'll get the results you want:
COL1 COL2 COL3
---------------------
aa 111 blah_x
aa 111 blah_j
bb 112 blah_d
bb 112 blah_d
I just tested this on SQL Server 2000, but you should see the same results on any modern SQL database.
blorgbeard proved me wrong -- good for him!
A: Join on yourself like this:
SELECT a.col3, b.col3, a.col1, a.col2
FROM tablename a, tablename b
WHERE a.col1 = b.col1 AND a.col2 = b.col2 AND a.col3 != b.col3
If you're using postgresql, you can use the oid to make it return less duplicated results, like this:
SELECT a.col3, b.col3, a.col1, a.col2
FROM tablename a, tablename b
WHERE a.col1 = b.col1 AND a.col2 = b.col2 AND a.col3 != b.col3
AND a.oid < b.oid
A: Don't have a database handy to test this, but I think it should work...
select
*
from
theTable
where
col1 in
(
select
col1
from
theTable
group by
col1||col2
having
count(col1||col2) > 1
)
A: My naive attempt would be
select a.*, b.* from table a, table b where a.col1 = b.col1 and a.col2 = b.col2 and a.col3 != b.col3;
but that would return all the rows twice. I'm not sure how you'd restrict it to just returning them once. Maybe if there was a primary key, you could add "and a.pkey < b.pkey".
Like I said, that's not elegant and there is probably a better way to do this.
A: Something like this should work:
SELECT a.COL1, a.COL2, a.COL3
FROM YourTable a
JOIN YourTable b ON b.COL1 = a.COL1 AND b.COL2 = a.COL2 AND b.COL3 <> a.COL3
In general, the JOIN clause should include every column that you're considering to be part of a "duplicate" (COL1 and COL2 in this case), and at least one column (or as many as it takes) to eliminate a row joining to itself (COL3, in this case).
A: This is pretty similar to the self-join, except it will not have the duplicates.
select COL1,COL2,COL3
from theTable a
where exists (select 'x'
from theTable b
where a.col1=b.col1
and a.col2=b.col2
and a.col3<>b.col3)
order by col1,col2,col3
A: Here is how you find duplicates. Tested in Oracle 10g with your data.
select * from tst
where (col1, col2) in
(select col1, col2 from tst group by col1, col2 having count(*) > 1)
A: select COL1,COL2,COL3
from table
group by COL1,COL2,COL3
having count(*)>1
A: Forget joins -- use an analytic function:
select col1, col2, col3
from
(
select col1, col2, col3, count(*) over (partition by col1, col2) rows_per_col1_col2
from table
)
where rows_per_col1_col2 > 1
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Securing DB and session-data on a PHP shared host I wrote a PHP web-application using SQLite and sessions stored on filesystem.
This is functionally fine and attractively low maintenance. But, now it needs to run on a shared host.
All web-applications on the shared host run as the same user, so my users' session data is vulnerable, as is the database, code, etc.
Many recommend storing sessions in DBMS such as MySQL in this situation. So at first I thought I will just do that, and move the SQLite data into MySQL too. But then I realized the MySQL credentials need to be readable by the web application user, so I'm back to square one.
I think the best solution is to use PHP as a CGI so it runs as a different user for each web-application. This sounds great, but my host does not do this; it uses mod_php. Are there any drawbacks from an admin's point of view to enabling this (performance, backward compatibility, etc.)? If not, then I will ask them to enable it.
Otherwise, is there anything I can do to secure my database and session data in this situation?
A: As long as your code is running as the shared web user, anything stored on the server is going to be vulnerable. Any other user could write a PHP script to examine any readable file on the server, including your data and PHP code.
If your hosting provider will allow it, running PHP as a CGI under a different user will help, but I expect there will be a significant performance hit, as each request will require a new process to be created. (You could look at FCGI as a better-performing alternative.)
The other approach would be to set a cookie based on something the user provides, and use that to encrypt session data. For instance, when the user logs in, take a hash of their username, password (as just supplied by them) and the current time, encrypt the session data with the hash, set a cookie containing the hash. On the next request, you'll get the cookie back, which you can then use to decrypt the session data. Note however that this will only protect the current session data; your user table, other data, and code will still be vulnerable.
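A rough sketch of that idea in modern PHP (names like $sessionData and $sessionFile are illustrative, and this is not a reviewed crypto design):
// on login: derive a key from data only the user supplies
$key = hash('sha256', $username . $password . time());
setcookie('session_key', $key); // the server never stores the key

// encrypt the session payload before writing it server-side
$iv   = openssl_random_pseudo_bytes(16);
$blob = openssl_encrypt(serialize($sessionData), 'aes-256-cbc', $key, 0, $iv);
file_put_contents($sessionFile, base64_encode($iv) . ':' . $blob);

// on the next request, $_COOKIE['session_key'] is used to decrypt the blob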
In this situation, you need to decide whether the tradeoff of the low cost of shared hosting is acceptable considering the reduced security it provides. This will depend on your application, and it may be that rather than trying to come up with a complex (and possibly not even very effective) way to add security, you're better off just accepting the risk.
A: I don't view security as all or nothing. There are steps you can take. Give the web db user only the permissions it needs. Store passwords as hashes. Use openid login so users provide their credentials over SSL.
PHP on cgi can be slower and some hosts may simply not want to support more than one environment.
You may need to stick with your host for some reason, but generally there are so many available that it is a good reminder for people to compare functionality and security as well as cost. I have noticed many companies starting to offer virtual machine hosting -- nearly dedicated server level security in terms of isolating your code from other users -- at what is to me reasonable cost.
A: A shared host is no way to run a web site if you are conscious about privacy and security of your data from the sites that you share the server with. Anything accessible to your web application is fair game for the others; it'll only be a matter of time before they can access it (assuming they do have incentive to do that to you).
A: "you can place your DB connection variables in a file below the web root. this will at least protect it from web access. if you're going to use file based sessions as well, you can set the session path in your user's directory and again outside the web root."
I don't have an account so I can't downvote that.. but seriously it is not even relevant to the question.
Duh you store stuff outside the webroot. That goes for any hosting scenario and is not specific to shared hosting. We're not talking about protecting from outsiders here. We're talking about protecting from other applications on the same machine.
To the OP I think PHP as CGI is the most secure solution, as you already suggested yourself. But as someone else said there is a performance hit with this.
Something you might look at is moving your sessions and db to MySQL and using safe_mode and/or open_basedir.
A: I would solve the problem with an infrastructure change instead of a code one.
Consider upgrading to a VPS server. Nowadays you can get them very inexpensively. I've seen VPSes starting at $10/mo.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Migrations for Java I use both Ruby on Rails and Java. I really enjoy using migrations when I am working on a Rails project, so I am wondering: is there a migrations-like tool for Java? If there is no such tool, is it a good idea to use migrations as a tool to control a database used by a Java project?
A: I've used Hibernate's SchemaUpdate to perform the same function as migrations. It's actually easier than migrations because every time you start up your app, it examines the database structure and syncs it up with your mappings so there's no extra rake:db:migrate step and your app can never be out of sync with the database it's running against. Hibernate mapping files are no more complex than Rails migrations so even if you didn't use Hibernate in the app, you could take advantage of it. The downside is that it's not as flexible as far as rolling back, migrating down, running DML statements. As pointed out in the comments, it also doesn't drop tables or columns. I run a separate method to do those manually as part of the Hibernate initialization process.
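A minimal sketch of the Hibernate 3 call (assumes a hibernate.cfg.xml on the classpath):
import org.hibernate.cfg.Configuration;
import org.hibernate.tool.hbm2ddl.SchemaUpdate;

Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
new SchemaUpdate(cfg).execute(true, true);           // print the DDL and apply it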
I don't see why you couldn't use Rails migrations though - as long as you don't mind installing the stack (Ruby, Rake, Rails), you wouldn't have to touch your app.
A: For a feature comparison between
*
*Flyway
*Liquibase
*c5-db-migration
*dbdeploy
*mybatis
*MIGRATEdb
*migrate4j
*dbmaintain
*AutoPatch
have a look at http://flywaydb.org
This should be a good start for you and anyone else to select the right tool for the job
A: There are also two independent implementations of rails-like migrations for Java:
1) Maven-based migrations from Carbon Five
2) Ant-based tasks from Hashrocket (my personal favorite)
Although these packages were written for Maven and Ant specifically, with some work you can adapt them to just about anything.
A: I ran across this post while researching the same question. I haven't come to any conclusions about the best tool or approach yet, but one tool that I've come across which hasn't been mentioned in other answers so far is dbdeploy. I'd be interested to read any comparisons of these tools.
Some other relevant resources: Martin Fowler and Pramod Sadalage's somewhat aged post on Evolutionary Database Design, and the book Refactoring Databases: Evolutionary Database Design by Sadalage and Scot Ambler.
A: Migrate4j seems like a candidate, but the project doesn't look mature enough for production usage.
A: There is also DbMaintain which has been initially developed inside Unitils but is now a dedicated project. We are currently using it and are very satisfied (which doesn't mean there aren't any good alternatives). I list more of them in my database+migration bookmarks (with a focus on tools supporting Maven).
A: Liquibase is another project in this domain worth checking out.
A: Grails has a dbmigrate utility that is patterned after the one from Rails. Since it's implemented in Groovy, you should be able to use it from any of your Java projects.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "86"
}
|
Q: Backgroundrb scheduled task ending I have a backgroundrb scheduled task that takes quite a long time to run. However it seems that the process is ending after only 2.5 minutes.
My background.yml file:
:schedules:
  :named_worker:
    :task_name:
      :trigger_args: 0 0 12 * * * *
      :data: input_data
I have zero activity on the server when the process is running. (Meaning I am the only one on the server watching the log files do their thing until the process suddenly stops.)
Any ideas?
A: There's not much information here that allows us to get to the bottom of the problem.
Because backgroundrb operates in the background, it can be quite hard to monitor/debug.
Here are some ideas I use:
*
*Write a unit test to test the worker code itself and make sure there are no problems there
*Put "puts" statements at multiple points in the code so you can at least see some responses while the worker is running.
*Wrap the entire worker in a begin..rescue..end block so that you can catch any errors that might be occurring and cutting the process short.
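For the last point, something like this (the log destination is illustrative):
def do_work(args)
  # ... the real work ...
rescue => e
  File.open('/tmp/worker_errors.log', 'a') do |f|
    f.puts "#{Time.now}: #{e.class}: #{e.message}"
  end
end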
A: Thanks Andrew. Those debugging tips helped. Especially the begin..rescue..end block.
It was still a pain to debug though. In the end it wasn't BackgroundRB cutting short after 2.5 minutes. There was a network connection being made that wasn't being closed properly. Once that was found and closed, everything works great.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: List Comprehension Library for Scheme? I know there is a list-comprehension library for common lisp (incf-cl), I know they're supported natively in various other functional (and some non-functional) languages (F#, Erlang, Haskell and C#) - is there a list comprehension library for Scheme?
incf-cl is implemented in CL as a library using macros - shouldn't it be possible to use the same techniques to create one for Scheme?
A: *
*Swindle is primarily a CLOS emulator library, but it has list comprehensions too. I've used them, they're convenient, but the version I used was buggy and incomplete. (I just needed generic functions.)
*However, you probably want SRFI-42. I haven't used it, but it HAS to have fewer bugs than the Swindle list comprehensions.
I don't know which Scheme you use. PLT Scheme bundles Swindle and SRFI-42. Both are supposed to be cross-Scheme compatible, though.
If you use PLT Scheme, here is SRFI-42's man page. You say (require srfi/42) to get it.
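A tiny example of an SRFI-42 eager comprehension (collecting squares):
(require srfi/42)
(list-ec (: i 5) (* i i)) ; => (0 1 4 9 16)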
A: You can use LINQ for R6RS Scheme (although it could be made to run under 'older' implementations).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Modifying/detecting Local Security Policy programmatically Is it possible to do at least one of the following:
1) Detect a setting of a Local Security Policy (Accounts: Limit local account use of blank passwords to console logon only)
2) Modify that setting
Using Win32/MFC?
A: I've been down this road before and ended up with:
http://groups.google.com/group/microsoft.public.platformsdk.security/browse_thread/thread/63d884134958cce7?pli=1
I was able to configure User Rights Assignments using the Lsa* functions in advapi32.dll but could never work out how to configure Security Options.
This may be of help though:
http://www.windowsdevcenter.com/pub/a/windows/2005/03/15/local_security_policies.html
http://support.microsoft.com/default.aspx?scid=214752
You could customise a template then run regsvr32 %windir%\system32\scecli.dll from inside your code.
Not elegant but might be a way.
A: Well, I think I figured out how to do the first part (detecting this setting). It's actually located in the registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
The value is "LimitBlankPasswordUse"; if it's 1 then the policy is Enabled, otherwise Disabled.
So, reading that will at least show me if I need to tell the user to modify it or not. I doubt I can change it though...
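For the reading part, a minimal Win32 sketch (error handling trimmed):
HKEY hKey;
DWORD value = 0;
DWORD size = sizeof(value);
if (RegOpenKeyEx(HKEY_LOCAL_MACHINE,
        TEXT("SYSTEM\\CurrentControlSet\\Control\\Lsa"),
        0, KEY_QUERY_VALUE, &hKey) == ERROR_SUCCESS)
{
    // value == 1 means the policy is Enabled
    RegQueryValueEx(hKey, TEXT("LimitBlankPasswordUse"), NULL, NULL,
        (LPBYTE)&value, &size);
    RegCloseKey(hKey);
}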
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Converting string to uint in actionscript / Flex I am creating a component and want to expose a color property as many Flex controls do. Let's say I have a simple component like this; let's call it foo_label:
<mx:Canvas>
<mx:Script>
[Bindable] public var color:uint;
</mx:Script>
<mx:Label text="foobar" color="{color}" />
</mx:Canvas>
and then add the component in another mxml file, something along the lines of:
<foo:foo_label color="red" />
When I compile, the compiler complains: cannot parse value of type uint from text 'red'. However, if I use a plain label I can do
<mx:Label text="foobar" color="red">
without any problems, and the color property is still type uint.
My question is: how can I expose a public property so that I can control the color of my component's text? Why can I use the string "red" as a uint field for the mx controls but cannot seem to do the same in a custom component? Do I need to do something special?
Thanks.
A: Color is not a property, it is a style. You need to define the style like this:
[Style(name="labelColor", type="uint", format="Color" )]
(enclose it in a <mx:Metadata> tag if you define it directly in MXML). You then need to add some ActionScript to handle this style and apply it to whichever control you need; please refer to http://livedocs.adobe.com/flex/3/html/help.html?content=skinstyle_1.html for more information.
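The handling code usually ends up looking something like this sketch (assuming the Label is given an id of myLabel):
override public function styleChanged(styleProp:String):void {
    super.styleChanged(styleProp);
    if (styleProp == "labelColor") {
        myLabel.setStyle("color", getStyle("labelColor"));
    }
}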
A: Here are two of my utility functions:
public static function convertUintToString( color:uint ):String {
return color.toString(16);
}
public static function convertStringToUint(value:String, mask:String):uint {
var colorString:String = "0x" + value;
var colorUint:uint = mx.core.Singleton.getInstance("mx.styles::IStyleManager2").getColorName( colorString );
return colorUint;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What's the easiest way to add a quote box to mediawiki? I installed mediawiki on my server as my personal knowledge base. Sometimes I copy some stuff from Web and paste to my wiki - such as tips & tricks from somebody's blog. How do I make the copied content appear in a box with border?
For example, the box at the end of this blog post looks pretty nice:
http://blog.dreamhost.com/2008/03/21/good-reminiscing-friday/
I could use the pre tag, but paragraphs in a pre tag won't wrap automatically.. Any ideas?
A: Mediawiki supports the div tag. Combine the div tag with some styles:
<div style="background-color: cyan; border-style: dashed;">
A bunch of text that will wrap.
</div>
You can play around with whatever css attributes you want, but that should get you started.
A: I used the code from @steve k, changing light-grey to black and adding padding between the border and the text. I found the light-grey nearly invisible, and the text was directly adjacent to the border.
<blockquote style="
color: black;
border: solid thin gray;
padding-top: 10px;
padding-right: 10px;
padding-bottom: 10px;
padding-left: 10px;
">
{{{1}}}
</blockquote>
A: I made a template in my wiki called Template:quote, which contains the following content:
<div style="background-color: #ddf5eb; border-style: dotted;">
{{{1}}}
</div>
Then I can use the template in a page, e.g.,
{{quote|a little test}}
Works pretty well - Thanks!
A: <blockquote style="background-color: lightgrey; border: solid thin grey;">
Det er jeg som kjenner hemmeligheten din. Ikke et pip, gutten min.
</blockquote>
The blockquotes are better than divs because they "explain" that the text is actually a blockquote, and not "just-some-text". Also a blockquote will most likely be properly indented, and actually look like a blockquote.
A: To combine the two mostly valid answers, you should use a MediaWiki template that itself utilizes a blockquote.
The content of the template:
<blockquote style="color: lightgrey; border: solid thin gray;">
{{{1}}}
</blockquote>
Usage on your WIKI page (assuming you named the template "quote"):
{{ quote | The text you want to quote }}
A: You can use index.php?title=MediaWiki:Common.css page for this purpose and set a CSS style for the <blockquote/> element there:
blockquote {
background-color: #ddf5eb;
border-style: dotted;
}
In a similar fashion you can style <pre/> which is useful for code snippets etc. so that it wraps content:
pre {
white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;
}
For longer code snippets you may want to use <syntaxhighlight/> (or <source/>) element that comes with SyntaxHighlight extension. You can style it too.
A: Set a width in the pre tag, and it will wrap.
<pre width="80%">
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
}
|
Q: What is the best way to implement a singleton pattern class in Actionscript 3? Since AS3 does not allow private constructors, it seems the only way to construct a singleton and guarantee an instance isn't explicitly created via "new" is to pass a single parameter and check it.
I've heard two recommendations, one is to check the caller and ensure it's the static getInstance(), and the other is to have a private/internal class in the same package namespace.
The private object passed to the constructor seems preferable, but it does not look like you can have a private class in the same package. Is this true? And more importantly, is it the best way to implement a singleton?
A: I've been using this for some time, which I believe I originally got from wikipedia of all places.
package {
public final class Singleton {
private static var instance:Singleton = new Singleton();
public function Singleton() {
if( Singleton.instance ) {
throw new Error( "Singleton and can only be accessed through Singleton.getInstance()" );
}
}
public static function getInstance():Singleton {
return Singleton.instance;
}
}
}
Here's an interesting summary of the problem, which leads to a similar solution.
A: A slight adaptation of enobrev's answer is to have instance as a getter. Some would say this is more elegant. Also, enobrev's answer won't enforce a Singleton if you call the constructor before calling getInstance. This may not be perfect, but I have tested this and it works. (There is definitely another good way to do this in the book "Advanced ActionScript 3 with Design Patterns" too.)
package {
public class Singleton {
private static var _instance:Singleton;
public function Singleton(enforcer:SingletonEnforcer) {
if( !enforcer)
{
throw new Error( "Singleton and can only be accessed through Singleton.getInstance()" );
}
}
public static function get instance():Singleton
{
if(!Singleton._instance)
{
Singleton._instance = new Singleton(new SingletonEnforcer());
}
return Singleton._instance;
}
}
}
class SingletonEnforcer{}
A: The pattern which is used by Cairngorm (which may not be the best) is to throw a runtime exception in the constructor if the constructor is being called a second time. For example:
public class Foo {
    private static var instance:Foo;

    public function Foo() {
        if (instance != null) {
            throw new Error("Singleton constructor called");
        }
        instance = this;
    }

    public static function getInstance():Foo {
        if (instance == null) {
            instance = new Foo();
        }
        return instance;
    }
}
A: You can get a private class like so:
package some.pack
{
    public class Foo
    {
        private static var inst:Foo;

        public function Foo(f:CheckFoo)
        {
            if (f == null) throw new Error(...);
        }

        public static function getInstance():Foo
        {
            if (inst == null)
                inst = new Foo(new CheckFoo());
            return inst;
        }
    }
}

class CheckFoo
{
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: object reference not set to an instance of object I have been getting an error in VB .Net
object reference not set to an instance of object.
Can you tell me what the causes of this error are?
A: Let's deconstruct the error message.
"object reference" means a variable you used in your code which referenced an object. The object variable could have been declared by you the or it you might just be using a variable declared inside another object.
"instance of object" Means that the object is blank (or in VB speak, "Nothing"). When you are dealing with object variables, you have to create an instance of that object before referencing it.
"not set to an " means that you tried to access an object, but there was nothing inside of it for the computer to access.
If you create a variable like
Dim aPerson as PersonClass
All you have done is tell the compiler that aPerson will represent a person, but not which person.
You can create a blank copy of the object by using the "New" keyword. For example
Dim aPerson as New PersonClass
You can test to see if the object is "Nothing" like this:
If aPerson Is Nothing Then
aPerson = New PersonClass
End If
Hope that helps!
A: sef,
If the problem is with Database return results, I presume it is in this scenario:
dsData = getSQLData(conn,sql, blah,blah....)
dt = dsData.Tables(0) 'Perhaps the obj ref not set is occurring here
To fix that:
dsData = getSQLData(conn,sql, blah,blah....)
If dsData.Tables.Count = 0 Then Exit Sub
dt = dsData.Tables(0) 'Perhaps the obj ref not set is occurring here
A: In general, under the .NET runtime, such a thing happens whenever a variable that's unassigned or assigned the value Nothing (in VB.Net, null in C#) is dereferenced.
Option Strict On and Option Explicit On will help detect instances where this may occur, but it's possible to get a null/Nothing from another function call:
Dim someString As String = someFunctionReturningString()
If someString Is Nothing Then
    System.Console.WriteLine(someString.Length) ' will throw the NullReferenceException
End If
and the NullReferenceException is the source of the "object reference not set to an instance of an object".
A: And if you think it's occurring when no data is returned from a database query, then maybe you should test the result before doing an operation on it?
Dim result As String = SqlCommand.ExecuteScalar() 'just for scope'
If result Is Nothing OrElse IsDBNull(result) Then
'no result!'
End If
A: The object has not been initialized before use.
At the top of your code file type:
Option Strict On
Option Explicit On
A: You can put a logging mechanism in your application so you can isolate the cause of the error. An Exception object has the StackTrace property which is a string that describes the contents of the call stack, with the most recent method call appearing first. By looking at it, you'll have more details on what might be causing the exception.
A: When working with databases, you can get this error when you try to get a value from a field or row which doesn't exist, i.e. if you're using datasets and you use:
Dim objDt as DataTable = objDs.Tables("tablename")
you get the "object reference not set to an instance of object" error if tablename doesn't exist in the DataSet. The same goes for rows or fields in the datasets.
A: Well, the error explains itself. Since you haven't provided any code sample, we can only say that somewhere in your code you are using a null object for some task. I got the same error for the code sample below.
Dim cmd As IDbCommand
cmd.Parameters.Clear()
As you can see, I am going to Clear a null object. For that, I'm getting the error
"object reference not set to an instance of an object"
Check your code for this kind of pattern. Since you haven't given a code example, we can't highlight the offending line :)
A: In case you have a class property and multiple constructors, you must initialize the property in all constructors.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Fetch one row per account id from list, part 2 Not sure how to ask a followup on SO, but this is in reference to an earlier question:
Fetch one row per account id from list
The query I'm working with is:
SELECT *
FROM scores s1
WHERE accountid NOT IN (SELECT accountid FROM scores s2 WHERE s1.score < s2.score)
ORDER BY score DESC
This selects the top scores, and limits results to one row per accountid; their top score.
The last hurdle is that this query is returning multiple rows for accountids that have multiple occurrences of their top score. So if accountid 17 has scores of 40, 75, 30, 75 the query returns both rows with scores of 75.
Can anyone modify this query (or provide a better one) to fix this case, and truly limit it to one row per account id?
Thanks again!
A: select accountid, max(score) from scores group by accountid;
A: If you're only interested in the accountid and the score, then you can use the simple GROUP BY query given by Paul above.
SELECT accountid, MAX(score)
FROM scores
GROUP BY accountid;
If you need other attributes from the scores table, then you can get other attributes from the row with a query like the following:
SELECT s1.*
FROM scores AS s1
LEFT OUTER JOIN scores AS s2 ON (s1.accountid = s2.accountid
AND s1.score < s2.score)
WHERE s2.accountid IS NULL;
But this still gives multiple rows, in your example where a given accountid has two scores matching its maximum value. To further reduce the result set to a single row, for example the row with the latest gamedate, try this:
SELECT s1.*
FROM scores AS s1
LEFT OUTER JOIN scores AS s2 ON (s1.accountid = s2.accountid
AND s1.score < s2.score)
LEFT OUTER JOIN scores AS s3 ON (s1.accountid = s3.accountid
AND s1.score = s3.score AND s1.gamedate < s3.gamedate)
WHERE s2.accountid IS NULL
AND s3.accountid IS NULL;
A: If your RDBMS supports them, then an analytic function would be a good approach particularly if you need all the columns of the row.
select ...
from (
select accountid,
score,
...
row_number() over
(partition by accountid
order by score desc) score_rank
from scores)
where score_rank = 1;
The row returned is indeterminate in the case you describe, but you can easily modify the analytic function, for example by ordering on (score desc, test_date desc) to get the more recent of two matching high scores.
Other analytic functions based on rank will achieve a similar purpose.
If you don't mind duplicates then the following would probably be more efficient than your current method:
select ...
from (
select accountid,
score,
...
max(score) over (partition by accountid) max_score
from scores)
where score = max_score;
A: If you are selecting a subset of columns then you can use the DISTINCT keyword to filter results.
SELECT DISTINCT accountid, score
FROM scores s1
WHERE accountid NOT IN (SELECT accountid FROM scores s2 WHERE s1.score < s2.score)
ORDER BY score DESC
A: Does your database support distinct? As in select distinct x from y?
A: This solution works in MS SQL, giving you the whole row.
SELECT *
FROM scores
WHERE scoreid IN
(
    SELECT max(scoreid)
    FROM scores AS s2
    JOIN
    (
        SELECT max(score) AS maxscore, accountid
        FROM scores s1
        GROUP BY accountid
    ) sub ON s2.score = sub.maxscore AND s2.accountid = sub.accountid
    GROUP BY s2.score, s2.accountid
)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Do fancy MVC URLs affect how caching is done? When reading some answers to a question on clearing cache for JS files, somebody pointed to this part of the HTTP spec. It basically says that URLs containing a ? should not be pulled from the cache, unless a specific expiry date is given. How do query-string-absent URLs, which are so common with MVC websites (RoR, ASP.Net MVC, etc.), get cached, and is the behaviour different than with more traditional query-string-based URLs?
A: AFAIK there is no difference on the part of browsers, as both Firefox and IE will (incorrectly) cache the response from a url with a querystring in the same way they cache the response from a url without a querystring. In the case of Safari, it respects the spec and doesn't cache urls with querystrings. HTTP proxies tend to be a tad erratic about what they consider cacheable.
It pays to have the headers set correctly and it's worth investigating ETags.
A: I believe you manage caching in ASP.NET MVC using the OutputCache attribute (on your controller methods).
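For example (the duration and vary settings are illustrative):
[OutputCache(Duration = 60, VaryByParam = "none")]
public ActionResult Index()
{
    return View();
}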
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: iPhone viewWillAppear not firing I've read numerous posts about people having problems with viewWillAppear when you do not create your view hierarchy just right. My problem is I can't figure out what that means.
If I create a RootViewController and call addSubView on that controller, I would expect the added view(s) to be wired up for viewWillAppear events.
Does anyone have an example of a complex programmatic view hierarchy that successfully receives viewWillAppear events at every level?
Apple's Docs state:
Warning: If the view belonging to a view controller is added to a view hierarchy directly, the view controller will not receive this message. If you insert or add a view to the view hierarchy, and it has a view controller, you should send the associated view controller this message directly. Failing to send the view controller this message will prevent any associated animation from being displayed.
The problem is that they don't describe how to do this. What does "directly" mean? How do you "indirectly" add a view?
I am fairly new to Cocoa and iPhone so it would be nice if there were useful examples from Apple besides the basic Hello World crap.
A: I just had the same issue. In my application I have 2 navigation controllers and pushing the same view controller in each of them worked in one case and not in the other. I mean that when pushing the exact same view controller in the first UINavigationController, viewWillAppear was called but not when pushed in the second navigation controller.
Then I came across this post UINavigationController should call viewWillAppear/viewWillDisappear methods
And realized that my second navigation controller did redefine viewWillAppear. Screening the code showed that I was not calling
[super viewWillAppear:animated];
I added it and it worked !
The documentation says:
If you override this method, you must call super at some point in your implementation.
A: If you use a navigation controller and set its delegate, then the view{Will,Did}{Appear,Disappear} methods are not invoked.
You need to use the navigation controller delegate methods instead:
navigationController:willShowViewController:animated:
navigationController:didShowViewController:animated:
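A sketch of what forwarding from the delegate can look like:
- (void)navigationController:(UINavigationController *)navigationController
      willShowViewController:(UIViewController *)viewController
                    animated:(BOOL)animated
{
    [viewController viewWillAppear:animated];
}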
A: I've been using a navigation controller. When I want to either descend to another level of data or show my custom view I use the following:
[self.navigationController pushViewController:<view> animated:<BOOL>];
When I do this, I do get the viewWillAppear function to fire. I suppose this qualifies as "indirect" because I'm not calling the actual addSubView method myself. I don't know if this is 100% applicable to your application since I can't tell if you're using a navigation controller, but maybe it will provide a clue.
A: Firstly, the tab bar should be at the root level, i.e., added to the window, as stated in the Apple documentation. This is key for correct behavior.
Secondly, you can use UITabBarDelegate / UINavigationBarDelegate to forward the notifications on manually, but I found that to get the whole hierarchy of view calls to work correctly, all I had to do was manually call
[tabBarController viewWillAppear:NO];
[tabBarController viewDidAppear:NO];
and
[navBarController viewWillAppear:NO];
[navBarController viewDidAppear:NO];
.. just ONCE before setting up the view controllers on the respective controller (right after allocation). From then on, it correctly called these methods on its child view controllers.
My hierarchy is like this:
window
UITabBarController (subclass of)
UIViewController (subclass of) // <-- manually calls [navController viewWill/DidAppear
UINavigationController (subclass of)
UIViewController (subclass of) // <-- still receives viewWill/Did..etc all the way down from a tab switch at the top of the chain without needing to use ANY delegate methods
Just calling the mentioned methods on the tab/nav controller the first time ensured that ALL the events were forwarded correctly. It stopped me needing to call them manually from the UINavigationBarDelegate / UITabBarControllerDelegate methods.
Sidenote:
Curiously, when it didn't work, the private method
- (void)transitionFromViewController:(UIViewController*)aFromViewController toViewController:(UIViewController*)aToViewController
.. which you can see from the callstack on a working implementation, usually calls the viewWill/Did.. methods but didn't until I performed the above (even though it was called).
I think it is VERY important that the UITabBarController is at window level though and the documents seem to back this up.
Hope that was clear(ish), happy to answer further questions.
A: Views are added "directly" by calling [view addSubview:subview].
Views are added "indirectly" by methods such as tab bars or nav bars that swap subviews.
Any time you call [view addSubview:subviewController.view], you should then call [subviewController viewWillAppear:NO] (or YES as your case may be).
I had this problem when I implemented my own custom root-view management system for a subscreen in a game. Manually adding the call to viewWillAppear cured my problem.
A: As no answer is accepted and people (like I did) land here, I'll give my variation, though I am not sure it was the original problem. When the navigation controller is added as a subview to another view, you must call the viewWillAppear/viewWillDisappear etc. methods yourself, like this:
- (void) viewWillAppear:(BOOL)animated
{
[super viewWillAppear:animated];
[subNavCntlr viewWillAppear:animated];
}
- (void) viewWillDisappear:(BOOL)animated
{
[super viewWillDisappear:animated];
[subNavCntlr viewWillDisappear:animated];
}
Just to make the example complete: this code appears in my ViewController, where I created and added the navigation controller into a view that I placed on the view.
- (void)viewDidLoad {
// This is the root View Controller
rootTable *rootTableController = [[rootTable alloc]
initWithStyle:UITableViewStyleGrouped];
subNavCntlr = [[UINavigationController alloc]
initWithRootViewController:rootTableController];
[rootTableController release];
subNavCntlr.view.frame = subNavContainer.bounds;
[subNavContainer addSubview:subNavCntlr.view];
[super viewDidLoad];
}
the .h looks like this
@interface navTestViewController : UIViewController <UINavigationControllerDelegate> {
IBOutlet UIView *subNavContainer;
UINavigationController *subNavCntlr;
}
@end
In the nib file I have the view and below this view I have a label a image and the container (another view) where i put the controller in. Here is how it looks. I had to scramble some things as this was work for a client.
A: The correct way to do this is using the UIViewController containment API.
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view.
UIViewController *viewController = ...;
[self addChildViewController:viewController];
[self.view addSubview:viewController.view];
[viewController didMoveToParentViewController:self];
}
A: I've run into this same problem. Just send a viewWillAppear message to your view controller before you add it as a subview. (There is one BOOL parameter which tells the view controller if it's being animated to appear or not.)
[myViewController viewWillAppear:NO];
Look at RootViewController.m in the Metronome example.
(I actually found Apple's example projects great. There's a LOT more than HelloWorld ;)
A: I use this code for push and pop view controllers:
push:
[self.navigationController pushViewController:detailViewController animated:YES];
[detailViewController viewWillAppear:YES];
pop:
[[self.navigationController popViewControllerAnimated:YES] viewWillAppear:YES];
.. and it works fine for me.
A: A very common mistake is as follows.
You have one view, UIView* a, and another one, UIView* b.
You add b to a as a subview.
If you try to call viewWillAppear in b, it will never be fired, because it is a subview of a
A: iOS 13 bit my app in the butt here. If you've noticed behavior change as of iOS 13 just set the following before you push it:
yourVC.modalPresentationStyle = UIModalPresentationFullScreen;
You may also need to set it in your .storyboard in the Attributes inspector (set Presentation to Full Screen).
This will make your app behave as it did in prior versions of iOS.
A: I finally found a solution for this THAT WORKS!
UINavigationControllerDelegate
I think the gist of it is to set your nav controller's delegate to the view controller it is in, and implement UINavigationControllerDelegate and its two methods. Brilliant! I'm so excited I finally found a solution!
A: Thanks iOS 13.
ViewWillDisappear, ViewDidDisappear, ViewWillAppear and
ViewDidAppear won't get called on a presenting view controller on
iOS 13 which uses a new modal presentation that doesn't cover the
whole screen.
Credits are going to Arek Holko. He really saved my day.
A: I'm not 100% sure on this, but I think that adding a view to the view hierarchy directly means calling -addSubview: on the view controller's view (e.g., [viewController.view addSubview:anotherViewController.view]) instead of pushing a new view controller onto the navigation controller's stack.
A: I think that adding a subview doesn't necessarily mean that the view will appear, so there is no automatic call to the class's viewWillAppear method.
A: I think what they mean "directly" is by hooking things up just the same way as the xcode "Navigation Application" template does, which sets the UINavigationController as the sole subview of the application's UIWindow.
Using that template is the only way I've been able to get the Will/Did/Appear/Disappear methods called on the object ViewControllers upon push/pops of those controllers in the UINavigationController. None of the other solutions in the answers here worked for me, including implementing them in the RootController and passing them through to the (child) NavigationController. Those functions (will/did/appear/disappear) were only called in my RootController upon showing/hiding the top-level VCs, my "login" and navigationVCs, not the sub-VCs in the navigation controller, so I had no opportunity to "pass them through" to the Nav VC.
I ended up using the UINavigationController's delegate functionality to look for the particular transitions that required follow-up functionality in my app, and that works, but it requires a bit more work in order to get both the disappear and appear functionality "simulated".
Also it's a matter of principle to get it to work after banging my head against this problem for hours today. Any working code snippets using a custom RootController and a child navigation VC would be much appreciated.
A: In case this helps anyone: I had a similar problem where my viewWillAppear was not firing on a UITableViewController. After a lot of playing around, I realized that the problem was that the UINavigationController controlling my UITableView was not on the root view. Once I fixed that, it worked like a champ.
A: [self.navigationController setDelegate:self];
Set the delegate to the root view controller.
A: I just had this problem myself and it took me 3 full hours (2 of which were spent googling) to fix it.
What turned out to help was to simply delete the app from the device/simulator, clean and then run again.
Hope that helps
A: In my case the problem was with a custom transition animation.
When modalPresentationStyle = .custom is set, viewWillAppear is not called.
In the custom transition animation class you need to call the methods
beginAppearanceTransition and endAppearanceTransition.
A: For Swift: first, create the protocol to declare what you want called in viewWillAppear:
protocol MyViewWillAppearProtocol{func myViewWillAppear()}
Second, create the class
class ForceUpdateOnViewAppear: NSObject, UINavigationControllerDelegate {
    func navigationController(_ navigationController: UINavigationController, willShow viewController: UIViewController, animated: Bool) {
        if let updatedCntllr: MyViewWillAppearProtocol = viewController as? MyViewWillAppearProtocol {
            updatedCntllr.myViewWillAppear()
        }
    }
}
Third, make an instance of ForceUpdateOnViewAppear a member of an appropriate class that has access to the Navigation Controller and exists as long as the Navigation Controller exists. It may be, for example, the root view controller of the navigation controller, or the class that creates or presents it. Then assign the instance of ForceUpdateOnViewAppear to the Navigation Controller's delegate property as early as possible.
A: In my case that was just a weird bug on the iOS 12.1 simulator. It disappeared after launching on a real device.
A: I have created a class that solves this problem.
Just set it as the delegate of your navigation controller, and implement one or two simple methods in your view controller - these will get called when the view is about to be shown or has been shown via the NavigationController
Here's the GIST showing the code
A: viewWillAppear is an override method of the UIViewController class, so adding a subview will not call viewWillAppear; but when you present, push, pop, show, setFront or popToRootViewController from a view controller, then viewWillAppear for the presented view controller will get called.
A: My issue was that viewWillAppear was not called when unwinding from a segue. The answer was to put a call to viewWillAppear(true) in the unwind segue in the View Controller that you are segueing back to:
@IBAction func unwind(for unwindSegue: UIStoryboardSegue, towards subsequentVC: UIViewController) {
    viewWillAppear(true)
}
A: You should only have 1 UIViewController active at any time. Any subviews you want to manipulate should be exactly that - subVIEWS - i.e. UIView.
I use a simple technique for managing my view hierarchy and have yet to run into a problem since I started doing things this way. There are 2 key points:
*
*a single UIViewController should be used to manage "a screen's worth"
of your app
*use UINavigationController for changing views
What do I mean by "a screen's worth"? It's a bit vague on purpose, but generally it's a feature or section of your app. If you've got a few screens with the same background image but different overlays/popups etc., that should be 1 view controller and several child views. You should never find yourself working with 2 view controllers. Note you can still instantiate a UIView in one view controller and add it as a subview of another view controller if you want certain areas of the screen to be shown in multiple view controllers.
As for UINavigationController - this is your best friend! Turn off the navigation bar and specify NO for animated, and you have an excellent way of switching screens on demand. You can push and pop view controllers if they're in a hierarchy, or you can prepare an array of view controllers (including an array containing a single VC) and set it to be the view stack using setViewControllers. This gives you total freedom to change VC's, while gaining all the advantages of working within Apple's expected model and getting all events etc. fired properly.
Here's what I do every time when I start an app:
*
*start from a window-based app
*add a UINavigationController as the window's rootViewController
*add whatever I want my first UIViewController to be as the rootViewController of the nav
controller
(note starting from window-based is just a personal preference - I like to construct things myself so I know exactly how they are built. It should work fine with view-based template)
All events fire correctly and basically life is good. You can then spend all your time writing the important bits of your app and not messing about trying to manually hack view hierarchies into shape.
A: I'm not sure this is the same problem that I solved.
On some occasions, a method isn't executed when invoked the normal way, such as "[self methodOne]".
Try
- (void)viewWillAppear:(BOOL)animated
{
    [self performSelector:@selector(methodOne)
               withObject:nil
               afterDelay:0];
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "118"
}
|
Q: In JavaScript, is there a source for time with a consistent resolution in milliseconds? The Date object in JavaScript performs differently machine to machine and browser to browser in respect to the function's resolution in milliseconds. I've found most machines have a resolution of about 16 ms on IE, where Chrome or Firefox may have a resolution as good as 1ms.
Is there another function available to JavaScript in general or IE specifically that will give a better time resolution? I am trying to trap and record keyDown and keyUp times in milliseconds and need it in the +/- 10 ms range or less.
To see an illustration of this, check out the "resolutions of new date()" section of this page. There is a table with a test button that evaluates the current machine/browser's JavaScript time resolution in milliseconds. Interestingly, Chrome regularly gets a resolution of 1ms.
http://www.merlyn.demon.co.uk/js-dates.htm#OV
My quest is for a JavaScript date-time method that will give sub 10ms resolution across browsers. something to replace or improve Date().
A: Since you are mentioning Internet Explorer, I assume that you are working on Windows. The 15 ms resolution you are getting may have to do with the Windows system timer resolution.
I've also noticed through running Java programs on Windows, that the resolution of the system timer is around 16 ms or so. (Using the System.currentTimeMillis() method.)
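A quick way to observe this in Java is a sketch like the following, which busy-waits on System.currentTimeMillis() and records the distinct step sizes it sees (on the Windows machines described here you'd expect mostly 15-16 ms steps):
import java.util.TreeSet;

public class TimerGranularity {
    public static void main(String[] args) {
        TreeSet<Long> steps = new TreeSet<Long>();
        long last = System.currentTimeMillis();
        for (int observed = 0; observed < 100; ) {
            long now = System.currentTimeMillis();
            if (now != last) {        // the clock just ticked; record the step size
                steps.add(now - last);
                last = now;
                observed++;
            }
        }
        System.out.println("Observed step sizes in ms: " + steps);
    }
}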
I did a quick search to see if I could find any information on the system timer resolution in Windows, and was able to find a link to Inside Windows NT High Resolution Timers from TechNet. It mentioned a little bit about the resolution of the Windows system timer:
Windows NT bases all of its timer
support off of one system clock
interrupt, which by default runs at a
10 millisecond granularity. This is
therefore the resolution of standard
Windows timers.
(I'm assuming that Windows XP and Vista still has the same timer, consider it is a descendent of NT.)
Unless Firefox and Chrome have their own high-resolution timer implemented, I believe that the best resolution you'll be able to get from a browser on the Windows platform is going to be around 10 ms.
Although not relevant to this question, I also did find an article on MSDN on high-resolution timers on Windows: Implement a Continuously Updating, High-Resolution Time Provider for Windows
A: High resolution timing on a desktop machine is still an open topic.
Today's popular operating systems provide you only with a granularity of 10 ms, because that's the period of their clock timer interrupt. You will find the 10 ms also in Linux manpages, for example. The browser will only expose the timers provided by the operating system, with added browser-internal overhead.
That said, it is possible to achieve a higher granularity. But all these techniques are specific to the hardware setup and you cannot expect them to be exposed through JavaScript in the near future.
A: AFAIK, milliseconds is as good as it gets in JavaScript. Here is the Mozilla.org documentation for the Date object. Nothing in there indicates anything with finer resolution.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: What's the quickest way to dump & load a MySQL InnoDB database using mysqldump? I would like to create a copy of a database with approximately 40 InnoDB tables and around 1.5GB of data with mysqldump and MySQL 5.1.
What are the best parameters (ie: --single-transaction) that will result in the quickest dump and load of the data?
As well, when loading the data into the second DB, is it quicker to:
1) pipe the results directly to the second MySQL server instance and use the --compress option
or
2) load it from a text file (ie: mysql < my_sql_dump.sql)
A: Pipe it directly to another instance, to avoid disk overhead. Don't bother with --compress unless you're running over a slow network, since on a fast LAN or loopback the network overhead doesn't matter.
A: QUICKLY dumping a quiesced database:
Using the "-T " option with mysqldump results in lots of .sql and .txt files in the specified directory. This is ~50% faster for dumping large tables than a single .sql file with INSERT statements (takes 1/3 less wall-clock time).
Additionally, there is a huge benefit when restoring if you can load multiple tables in parallel, and saturate multiple cores. On an 8-core box, this could be as much as an 8X difference in wall-clock time to restore the dump, on top of the efficiency improvements provided by "-T". Because "-T" causes each table to be stored in a separate file, loading them in parallel is easier than splitting apart a massive .sql file.
Taking the strategies above to their logical extreme, one could create a script to dump a database widely in parallel. Well, that's exactly what the Maatkit mk-parallel-dump (see http://www.maatkit.org/doc/mk-parallel-dump.html) and mk-parallel-restore tools are; perl scripts that make multiple calls to the underlying mysqldump program. However, when I tried to use these, I had trouble getting the restore to complete without duplicate key errors that didn't occur with vanilla dumps, so keep in mind that your mileage may vary.
Dumping data from a LIVE database (w/o service interruption):
The --single-transaction switch is very useful for taking a dump of a live database without having to quiesce it or taking a dump of a slave database without having to stop slaving.
Sadly, -T is not compatible with --single-transaction, so you only get one.
Usually, taking the dump is much faster than restoring it. There is still room for a tool that take the incoming monolithic dump file and breaks it into multiple pieces to be loaded in parallel. To my knowledge, such a tool does not yet exist.
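To give a feel for how simple such a splitter could be, here's a rough Java sketch. It assumes the default mysqldump output, where each table's section begins with a "-- Table structure for table" comment; the output file names are just illustrative:
import java.io.*;

public class DumpSplitter {
    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        PrintWriter out = new PrintWriter(new FileWriter("preamble.sql"));
        String line;
        int chunk = 0;
        while ((line = in.readLine()) != null) {
            // mysqldump marks each table's section with this comment by default
            if (line.startsWith("-- Table structure for table")) {
                out.close();
                out = new PrintWriter(new FileWriter("chunk-" + (++chunk) + ".sql"));
            }
            out.println(line);
        }
        out.close();
        in.close();
    }
}
Each chunk-N.sql could then be fed to its own mysql client process; in practice you'd also want to replay the session SET statements from preamble.sql at the top of each chunk.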
Transferring the dump over the Network is usually a win
To listen for an incoming dump on one host run:
nc -l 7878 > mysql-dump.sql
Then on your DB host, run
mysqldump $OPTS | nc myhost.mydomain.com 7878
This reduces contention for the disk spindles on the master from writing the dump to disk slightly speeding up your dump (assuming the network is fast enough to keep up, a fairly safe assumption for two hosts in the same datacenter). Plus, if you are building out a new slave, this saves the step of having to transfer the dump file after it is finished.
Caveats - obviously, you need to have enough network bandwidth not to slow things down unbearably, and if the TCP session breaks, you have to start all over, but for most dumps this is not a major concern.
Lastly, I want to clear up one point of common confusion.
Despite how often you see these flags in mysqldump examples and tutorials, they are superfluous because they are turned ON by default:
*
*--opt
*--add-drop-table
*--add-locks
*--create-options
*--disable-keys
*--extended-insert
*--lock-tables
*--quick
*--set-charset.
From http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html:
Use of --opt is the same as specifying --add-drop-table, --add-locks, --create-options, --disable-keys, --extended-insert, --lock-tables, --quick, and --set-charset. All of the options that --opt stands for also are on by default because --opt is on by default.
Of those behaviors, "--quick" is one of the most important (skips caching the entire result set in mysqld before transmitting the first row), and can be used with "mysql" (which does NOT turn --quick on by default) to dramatically speed up queries that return a large result set (eg dumping all the rows of a big table).
A: I think it will be a lot faster and save you disk space if you try database replication as opposed to using mysqldump. Personally I use SQLyog Enterprise for my really heavy lifting, but there are also a number of other tools that can provide the same services - unless of course you would like to use only mysqldump.
A: For innodb, --order-by-primary --extended-insert is usually the best combo. If you're after every last bit of performance and the target box has many CPU cores, you might want to split the resulting dumpfile and do parallel inserts in many threads, up to innodb_thread_concurrency/2.
Also, tweak the innodb_buffer_pool_size on the target to the max you can afford, and increase innodb_log_file_size to 128 or 256 MB (careful with this, you need to remove the old logfiles before restarting the mysql daemon otherwise it won't restart)
A: Use mk-parallel-dump tool from Maatkit.
At least that would probably be faster. I'd trust mysqldump more.
How often are you doing this? Is it really an application performance problem? Perhaps you should design a way of doing this which doesn't need to dump the whole data (replication?)
On the other hand, 1.5G is quite a small database so it probably won't be much of a problem.
A: mydumper is a good choice, with parallel export, even parallel threads per table, and compressed files, see:
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: How big of a security risk is checking out an svn project right into production site? Not that I'm doing something like that, but I am kind of interested in how bad a practice like that is.
A: I can't necessarily comment on the security risks involved, but it could put you in a situation where unvetted/not fully tested code ends up getting into the production environment. If you're considering using svn as method for distributing the source into different environments (dev,testing,production,etc), I'd suggest you take the following approach:
Have a section of the tree that's kept stable (most likely a branch), and make it someone's responsibility as a gate keeper to that branch. All commits to 'stable' will have to go through them and they will be responsible with making sure nothing goes in without verification. This position can be rotated on a weekly or monthly basis if no one wants to do it for very long.
Also, if you just want to do an ad-hoc dump from subversion to production periodically, then you can use the 'svn export' command.
Finally, I'm guessing that this is web development if all you need to do is checkout to setup a production environment. If this is the case then make sure the user that the web server runs under does not have read access on the '.svn' directories storing the subversion metadata.
A: None as long as your server forbids access to all .svn directories from the web.
A: I don't consider it to be a security risk or bad practice at all. It's hugely convenient and something I'd probably do in future projects as a matter of course.
As an example, Capistrano (a rails automated deployment solution) is built around checking your code out from SVN onto your production servers.
There are some dumb things you could do which might make it a bad practice, but they are all easily mitigated. For example:
*
*Exposing your svn repo to the web with no password protection - Don't do this!
*Exposing your svn repo using http instead of https, so people sniffing your traffic can get your passwords - Again, don't do this! Just run it over https instead.
*Checking your code out using an account with svn read/write access. Personally I wouldn't worry about this last step, as if they compromise your production server you have bigger problems, and you can easily just roll back whatever changes they may try to commit to svn. If you were extremely paranoid you could just make a readonly svn account for production checkouts.
*Checking out your trunk to production - This is only an issue if you run with an unstable trunk, you can just check out your stable branches/tags for deployment instead.
A: There are already some great answers. But let me try to quantify the risks in some way.
Suppose that 2 months ago, the risk of a trojan was small enough to be acceptable. Along comes Kaminsky's DNS attack, and presto, the risk of a trojan just went up from a theoretical active attack to something in the "script kiddie" realm. This is because most public subversion projects either use http, or if https, don't use a certificate with a full cert chain. Then all an adversary needs to do is poison the DNS and clone the SVN server, with their own trojan.
A: Well, if this code you are checking out is baselined(stable) I don't think is much of a problem.
But you certainly should tag the code, so you know later what you put there.
A: Probably safer than a copy from a test server, at least you are sure that you are getting the correct version of all files and all files are copied.
A: If we are talking about application stability (or code), there is always a risk during deployment.
But other than that, what is the security risk if you can use https as opposed to http? Or you can even use an SSH gateway.
A: Here's how I'd do it:
Assumptions:
*
*Project under a single root folder
(projectroot)
*All files under version control
Steps
1 Ensure there's a tag for the "new" production version
2 Check out or export that tag into folder projectroot.new
3 Stop the service
4 Rename projectroot.old << projectroot << projectroot.new
5 Restart the service
6 If you need to fall back, reverse step 4
Reasoning
This is to make the actual implementation and fallback steps as elementary as possible. You could simply use svn switch, but any problems when falling back could leave you with a broken system.
Clearly this is the simplest possible case - no migration of data, no unversioned config files, and so on; but I think the key is in building a copy tree then swapping to give you a clean crisp switchover and fallback.
A: I don't necessarily like the idea of checking out a repository directly to deployment. Specifically, do you need every single file (like testcases) deployed to production? Also, will you at some point in the future have any generated code? It's probably better to have a build system that builds a distribution to deploy.
However, in lieu of any of those decisions, make sure you write down the revision of the repository you are syncing. This way, the sync is reproducible, and if a bug comes up in production that you cannot reproduce, you can sync your local repository back to a state where it is consistent with production.
A: I find using SVN highly convenient and reliable. We have a policy of keeping trunk stable and making non-critical changes in branches, which are later merged into trunk shortly prior to a release date.
It makes releasing as simple as executing 'svn up' for smaller/less complex projects. Simplifying deployment makes it easier for non-developers (sysadmin, on-call support, et al) to quickly revert problematic changes in the event that the relevant developer is not available. In the event of trouble with a new release it's simply a matter of rolling back to the last known stable copy.
My only real concern would be visiblity of SVN metadata. Be sure that you've setup your web server to deny access to .svn directories (and all files contained within). You could use svn export, or delete the SVN metadata as part of your release process: find . -name .svn -print0 | xargs -0 rm -rf
What you don't want is someone surfing to www.example.com/.svn/entries which would reveal your source code repository, usernames and files. This is particularly bad if you've done silly things like "passwords.conf" which may be readable to users (depending on server configuration), of course, that's not really the fault of SVN. As mentioned in other answers, you don't want to use HTTP either.
In short, as long as your metadata and SVN repository are secure then I see no cons, only benefits.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: OpenID Over SSL with self signed certificate I setup my own open id provider on my personal server, and added a redirect to https in my apache config file. When not using a secure connection (when I disable the redirect) I can log in fine, but with the redirect I can't log in with this error message:
The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.
I'm guessing that this is because I am using a self signed certificate.
Can anyone confirm if the self signed certificate is the issue? If not does anyone have any ideas what the problem is?
A: The primary benefit of using SSL for your OpenID URL is that it gives the relying party a mechanism to discover if DNS has been tampered with. It's impossible for the relying party to tell if an OpenID URL with a self-signed certificate has been compromised.
There are other benefits you get from using SSL on your provider's endpoint URL (easier to establish associations, no eavesdropping on the extension data) which would still hold if you used a self-signed cert, but I would consider those to be secondary.
A: OpenID is designed in a redirect-transparent way. As long as the necessary key/value pairs are preserved at each redirect, either by GET or POST, everything will operate correctly.
The easiest solution to achieve compatibility with consumers that do not work with self-signed certificates is to use a non-encrypted end-point which redirects checkid_immediate and checkid_setup messages to an encrypted one.
Doing this in your server code is easier than with web server redirects as the former can more easily deal with POST requests, while also keeping code together. Furthermore, you can use the same end-point to handle all OpenID operations, regardless whether or not it should be served over SSL, as long as proper checks are done.
For example, in PHP, the redirect can be as simple as:
// Redirect OpenID authentication requests to https:// of same URL
// Assuming valid OpenID operation over GET
if (!isset($_SERVER['HTTPS']) &&
($_GET['openid_mode'] == 'checkid_immediate' ||
$_GET['openid_mode'] == 'checkid_setup'))
http_redirect("https://{$_SERVER['HTTP_HOST']}{$_SERVER['REQUEST_URI']}");
As the openid.return_to value was generated against a plain HTTP end-point, as far as the consumer is concerned, it is only dealing with a non-encrypted server. Assuming proper OpenID 2.0 operation with sessions and nonces, whatever information passed between the consumer and your sever should not reveal exploitable information. Operations between your browser and the OpenID server, which are exploitable (password snooping or session cookie hijacking) are done over an encrypted channel.
Aside from keeping out eavesdroppers, having authentication operations be carried out over SSL allows you to use the secure HTTP cookie flag. This adds yet another layer of protection for checkid_immediate operations, should you wish to allow it.
A: (Disclaimer: I'm new to OpenID, so I might be wrong here.) The communication between the Open ID Consumer (e.g., StackOverflow) and the Open ID Provider (your server) does not require HTTPS -- it will work just as fine and just as securely over plain HTTP. What you need to do is to configure your server to switch to HTTPS only when it shows you your login page. In that case, only your browser needs to concern itself with the self-signed certificate. You could import the certificate onto your PC and everything will be as secure as with, say, Verisign-issued certificate.
A: It sounds like it. The client of your OpenID server doesn't trust the root certification authority.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Writing a cookie from an ASP.Net HTTPHandler - Page.Response object? I am creating an HTTP handler that listens for calls to a specific file type, and handles it accordingly. My HTTP Handler listens for .bcn files, then writes a cookie to the user's computer and sends back an image... this will be used in advertising banners so that the user is tagged as seeing the banner, and we can then offer special deals when they visit our site later.
The problem I'm having is getting access to the Page object... of course an HTTPHandler is not actually a page, and since the Response object lives within the Page object, I can't get access to it to write the cookie.
Is there a way around this, or do i need to revert back to just using a standard aspx page to do this?
Thanks heaps..
Greg
A: You can access the Response object from the HttpContext object passed to the ProcessRequest method from IHttpHandler. This is the same object exposed by Page.Response.
A: The ProcessRequest() method defined in IHttpHandler is passed an HttpContext reference. This HttpContext object has Response and Request properties, which you can use.
A: ah yes... thanks heaps cKramer :)
Working code is:
HttpContext.Current.Response.Cookies.Add(cookie);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How often do you update statistics in SQL Server 2000? I'm wondering if updating statistics has helped you before and how did you know to update them?
A: exec sp_updatestats
Yes, updating statistics can be very helpful if you find that your queries are not performing as well as they should. This is evidenced by inspecting the query plan and noticing when, for example, table scans or index scans are being performed instead of index seeks. All of this assumes that you have set up your indexes correctly.
There is also the UPDATE STATISTICS command, but I've personally never used that.
A: It's common to add your statistics update to a maintenance plan (as in an Enterprise Manager-defined Maintenance plan). That way it happens on a schedule - daily, weekly, whatever.
SQL Server 2000 uses statistics to make good decisions about query execution so they definitely help.
It's a good idea to rebuild your indexes at the same time (DBCC DBREINDEX and DBCC INDEXDEFRAG).
A: If you rebuild indexes, then the statistics for those indexes are automatically rebuilt.
If your timeframes allow, then running UPDATE STATISTICS as part of a maintenance plan is a good idea, as frequently as nightly (if your indexes are being rebuilt less frequently than that).
SQL Server: To determine if out-of-date statistics are the cause of a query performing poorly, turn on 'Query->Display Estimated Execution Plan' (CTRL-L) in Management Studio and run the query. Open another window, paste in the same query and turn on 'Query->Display Actual Execution Plan' (CTRL-M) in Management Studio and re-run the query. If the execution plans are different then statistics are most likely out of date.
A: Updating statistics becomes necessary after the following events:
- Records are inserted into your table
- Records are deleted from your table
- Records are updated in your table
If you have a large database with millions of records that gets lots of writes per day you probably should be determining an off-peak time to schedule index updates.
Also, you need to consider your type of traffic. If you have a lot (millions) of records in tables with many foreign key dependencies and you have a larger proportion of writes to reads you might want to consider turning off automatic statistics recomputation (NOTE: this feature will be removed in a future version of SQL Server, but for SQL Server 2000 you should be OK). This tells the engine to not recompute statistics on every INSERT, DELETE, or UPDATE and makes those actions much more performant.
Indexes are no laughing matter. They are the heart and soul of a performant database.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: What's a good Systems Interface Specification template? A client has a number of disparate systems that they are planning to link together and have asked for a set of system interface specifications that would document the data and protocols used to interface the different parts.
The interfaces are between processes not between users.
Any recommendations for a template that we could use to document the system interfaces?
A: This book might have the sort of guidance you're looking for: Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I get a range's address including the worksheet name, but not the workbook name, in Excel VBA? If I have a Range object--for example, let's say it refers to cell A1 on a worksheet called Book1. So I know that calling Address() will get me a simple local reference: $A$1. I know it can also be called as Address(External:=True) to get a reference including the workbook name and worksheet name: [Book1]Sheet1!$A$1.
What I want is to get an address including the sheet name, but not the book name. I really don't want to call Address(External:=True) and try to strip out the workbook name myself with string functions. Is there any call I can make on the range to get Sheet1!$A$1?
A: Only way I can think of is to concatenate the worksheet name with the cell reference, as follows:
Dim cell As Range
Dim cellAddress As String
Set cell = ThisWorkbook.Worksheets(1).Cells(1, 1)
cellAddress = cell.Parent.Name & "!" & cell.Address(External:=False)
EDIT:
Modify last line to :
cellAddress = "'" & cell.Parent.Name & "'!" & cell.Address(External:=False)
if you want it to work even if there are spaces or other funny characters in the sheet name.
A: Ben is right. I also can't think of any way to do this. I'd suggest either the method Ben recommends, or the following to strip the Workbook name off.
Dim cell As Range
Dim address As String
Set cell = Worksheets(1).Cells.Range("A1")
address = cell.address(External:=True)
address = Right(address, Len(address) - InStr(1, address, "]"))
A: The Address() worksheet function does exactly that. As it's not available through Application.WorksheetFunction, I came up with a solution using the Evaluate() method.
This solution let Excel deals with spaces and other funny characters in the sheet name, which is a nice advantage over the previous answers.
Example:
Evaluate("ADDRESS(" & rng.Row & "," & rng.Column & ",1,1,""" & _
rng.Worksheet.Name & """)")
returns exactly "Sheet1!$A$1", with a Range object named rng referring the A1 cell in the Sheet1 worksheet.
This solution returns only the address of the first cell of a range, not the address of the whole range ("Sheet1!$A$1" vs "Sheet1!$A$1:$B$2"). So I use it in a custom function:
Public Function AddressEx(rng As Range) As String
Dim strTmp As String
strTmp = Evaluate("ADDRESS(" & rng.Row & "," & _
rng.Column & ",1,1,""" & rng.Worksheet.Name & """)")
If (rng.Count > 1) Then
strTmp = strTmp & ":" & rng.Cells(rng.Count) _
.Address(RowAbsolute:=True, ColumnAbsolute:=True)
End If
AddressEx = strTmp
End Function
The full documentation of the Address() worksheet function is available on the Office website: https://support.office.com/en-us/article/ADDRESS-function-D0C26C0D-3991-446B-8DE4-AB46431D4F89
A: Split(cell.address(External:=True), "]")(1)
A: I found the following worked for me in a user defined function I created. I concatenated the cell range reference and worksheet name as a string and then used in an Evaluate statement (I was using Evaluate on Sumproduct).
For example:
Function SumRange(RangeName As Range) As String
Dim strCellRef As String, strSheetName As String, strRngName As String
strCellRef = RangeName.Address
strSheetName = RangeName.Worksheet.Name & "!"
strRngName = strSheetName & strCellRef
'e.g. SumRange = Evaluate("SUMPRODUCT(" & strRngName & ")")
End Function
Then refer to strRngName in the rest of your code.
A: You may need to write code that handles a range with multiple areas, which this does:
Public Function GetAddressWithSheetname(Range As Range, Optional blnBuildAddressForNamedRangeValue As Boolean = False) As String
Const Seperator As String = ","
Dim WorksheetName As String
Dim TheAddress As String
Dim Areas As Areas
Dim Area As Range
WorksheetName = "'" & Range.Worksheet.Name & "'"
For Each Area In Range.Areas
' ='Sheet 1'!$H$8:$H$15,'Sheet 1'!$C$12:$J$12
TheAddress = TheAddress & WorksheetName & "!" & Area.Address(External:=False) & Seperator
Next Area
GetAddressWithSheetname = Left(TheAddress, Len(TheAddress) - Len(Seperator))
If blnBuildAddressForNamedRangeValue Then
GetAddressWithSheetname = "=" & GetAddressWithSheetname
End If
End Function
A: rngYourRange.Address(,,,TRUE)
Shows External Address, Full Address
A: The best way I found to do this is to use the following code:
Dim SelectedCell As String
'This message box allows you to select any cell on any sheet and it will return it in the format ='worksheetname'!$A$X where X is any number.
SelectedCell = Application.InputBox("Select a Cell on ANY sheet in your workbook", "Bookmark", Type:=8).Address(External:=True)
SelectedCell = "=" & "'" & Right(SelectedCell, Len(SelectedCell) - Len("[" & ActiveWorkbook.Name & "]") - 1)
'Be sure to modify Sheet1.Cells(1,1) with the Sheet and cell you want to use as the destination. I'd recommend using the Sheets VBA name.
Sheet1.Cells(1, 1).Value = SelectedCell
How it works:
You click on the desired cell when the message box appears. The string from "Address(External:=True)" (i.e. '[Code Sheet.xlsb]Settings'!$A$1) is then modified to remove the workbook name ([Code Sheet.xlsb]).
Using the previous example, it does this by taking the Len of the full address
'[Code Sheet.xlsb]Settings'!$A$1 and subtracting the Len of "[Code Sheet.xlsb]" plus 1, leaving you with Settings'!$A$1.
SelectedCell = "=" & "'" & Right(SelectedCell, Len(SelectedCell) - Len("[" & ActiveWorkbook.Name & "]") - 1)
The code then prepends "='" to ensure that it will be seen as a formula (='Settings'!$A$1).
I'm not sure if it is only Excel on iOS, but for some reason you will get an error if you add the "='" in any other way than "=" & "'" as seen below.
SelectedCell = "=" & "'" & Right....
From here, all you need to do is write the result to the sheet and cell where you want your new formula.
Sheet1.Cells(1, 1).Value = SelectedCell
If you open a new workbook, the full code above will work as-is.
This code is especially useful because changing the name of the workbook, or of the sheet that you are selecting from in the message box, will not result in bugs later on.
Thanks everyone in the forum - before today I was not aware that External:=True was a thing; it will make my coding a lot easier. Hope this can also help someone some day.
A: [edit on 2009-04-21]
As Micah pointed out, this only works when you have named that
particular range (hence .Name anyone?) Yeah, oops!
[/edit]
A little late to the party, I know, but in case anyone else catches this in a google search (as I just did), you could also try the following:
Dim cell as Range
Dim address as String
Set cell = Sheet1.Range("A1")
address = cell.Name
This should return the full address, something like "=Sheet1!$A$1".
Assuming you don't want the equal sign, you can strip it off with a Replace function:
address = Replace(address, "=", "")
A: Why not just return the worksheet name with
address = cell.Worksheet.Name
then you can concatenate the address back on like this
address = cell.Worksheet.Name & "!" & cell.Address
A: Dim rg As Range
Set rg = Range("A1:E10")
Dim i As Integer, j As Integer
For i = 1 To rg.Rows.Count
For j = 1 To rg.Columns.Count
rg.Cells(i, j).Value = rg.Cells(i, j).Address(False, False)
Next
Next
A: For confused old me a range
.Address(False, False, , True)
seems to give in format TheSheet!B4:K9
If it does not why the criteria .. avoid Str functons
will probably only take less a millisecond and use 153 already used electrons
about 0.3 Microsec
RaAdd=mid(RaAdd,instr(raadd,"]") +1)
or
'about 1.7 microsec
RaAdd= split(radd,"]")(1)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43"
}
|
Q: What standard techniques are there for using cpu specific features in DLLs? Short version: I'm wondering if it's possible, and how best, to utilise CPU specific
instructions within a DLL?
Slightly longer version:
When downloading (32bit) DLLs from, say, Microsoft it seems that one size fits all processors.
Does this mean that they are strictly built for the lowest common denominator (ie. the
minimum platform supported by the OS)?
Or is there some technique that is used to export a single interface within the DLL but utilise
CPU specific code behind the scenes to get optimal performance? And if so, how is it done?
A: I don't know of any standard technique but if I had to make such a thing, I would write some code in the DllMain() function to detect the CPU type and populate a jump table with function pointers to CPU-optimized versions of each function.
There would also need to be a lowest common denominator function for when the CPU type is unknown.
You can find current CPU info in the registry here:
HKEY_LOCAL_MACHINE\HARDWARE\DESCRIPTION\System\CentralProcessor
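The detect-once, dispatch-forever pattern itself is language-agnostic. Here's a loose Java sketch of the same idea - the capability probe and the two implementations are placeholders, not real CPU feature checks:
interface Transform {
    int[] apply(int[] data);
}

class BaselineTransform implements Transform {
    public int[] apply(int[] data) { return data; } // lowest-common-denominator path
}

class FastTransform implements Transform {
    public int[] apply(int[] data) { return data; } // stand-in for the optimized path
}

class Dispatcher {
    // Chosen once at load time, like populating a jump table in DllMain()
    private static final Transform IMPL =
        "amd64".equals(System.getProperty("os.arch")) // placeholder capability probe
            ? new FastTransform()
            : new BaselineTransform();

    static int[] transform(int[] data) {
        return IMPL.apply(data); // every call goes through the pre-selected entry
    }
}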
A: The DLL is expected to work on every computer WIN32 runs on, so you are stuck with the i386 instruction set in general. There is no official method of exposing functionality/code for specific instruction sets. You have to do it by hand and transparently.
The technique used is basically as follows:
- determine CPU features like MMX, SSE at runtime
- if they are present, use them; if not, have fallback code ready
Because you cannot let your compiler optimise for anything else than i386, you will have to write the code using the specific instruction sets in inline assembler. I don't know if there are higher-language toolkits for this. Determining the CPU features is straightforward, but could also need to be done in assembler.
A: An easy way to get the SSE/SSE2 optimizations is to just use the /arch argument for MSVC. I wouldn't worry about fallback--there is no reason to support anything below that unless you have a very niche application.
http://msdn.microsoft.com/en-us/library/7t5yh4fd.aspx
I believe gcc/g++ have equivalent flags.
A: DLLs you download from Microsoft are targeted for the generic x86 architecture for the simple reason that it has to work across all the multitude of machines out there.
Until the Visual Studio 6.0 time frame (I do not know if it has changed) Microsoft used to optimize its DLLs for size rather than speed. This is because the reduction in the overall size of the DLL gave a higher performance boost than any other optimization that the compiler could generate. This is because speed ups from micro optimization would be decidedly low compared to speed ups from not having the CPU wait for the memory. True improvements in speed come from reducing I/O or from improving the base algorithm.
Only a few critical loops that run at the heart of the program could benefit from micro optimizations simply because of the huge number of times they are invoked. Only about 5-10% of your code might fall in this category. You could rest assured that such critical loops would already be optimized in assembler by the Microsoft software engineers to some level and not leave much behind for the compiler to find. (I know it's expecting too much but I hope they do this)
As you can see, there would be only drawbacks from the increased DLL code that includes additional versions of code that are tuned for different architectures when most of this code is rarely used / are never part of the critical code that consumes most of your CPU cycles.
A: Intel's ICC can compile code twice, for different architectures. That way, you can have your cake and eat it. (OK, you get two cakes - your DLL will be bigger). And even MSVC2005 can do it for very specific cases (E.g. memcpy() can use SSE4)
There are many ways to switch between different versions. A DLL is loaded, because the loading process needs functions from it. Function names are converted into addresses. One solution is to let this lookup depend on not just function name, but also processor features. Another method uses the fact that the name to address function uses a table of pointers in an interim step; you can switch out the entire table. Or you could even have a branch inside critical functions; so foo() calls foo__sse4 when that's faster.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Streaming large result sets with MySQL 4.1.x + Connector/J I'm trying to get a large result set to stream and having no luck. The MySQL docs are somewhat unclear as to whether this should even work. E.g.:
When using versions of the JDBC driver earlier than 3.2.1, and connected to server versions earlier than 5.0.3, the `setFetchSize()` method has no effect, other than to toggle result set streaming as described above.
I'm using MySQL 4.1.19 with Connector/J 5.1.6. My code is basically:
stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY, java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);
stmt.executeQuery("select * from huge_table");
Are streaming result sets possible with MySQL 4.1.x? And if so, how?
A: What you are looking for is called "unbuffered query" in MySQL lingo, but a quick search seems to indicate that:
*
*MySQL only supports it starting from version 5.0
*JDBC doesn't support it
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Resources for code generation of database objects I am working on a project where I need to generate a series of classes to represent/access data in the database. Third party projects such as hibernate or subsonic are not an option. I am new to this subject domain, so I am looking for information on the topic. The project is in .net and I am using MyGeneration. I am primarily looking for information.
What is your single best resource for topics on code generation of data access?
Please post only one link at a time and look for your resource before posting. If you find your resource, please vote up instead of posting it. .
( I am not interesting in rep, just information)
A: Are you using .NET? Try MyGeneration
A: CodeSmith
A: ORAPig generates Python interfaces for Oracle packages. A Postgresql module is being worked on.
http://code.google.com/p/orapig
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Oracle Insert via Select from multiple tables where one table may not have a row I have a number of code value tables that contain a code and a description with a Long id.
I now want to create an entry for an Account Type that references a number of codes, so I have something like this:
insert into account_type_standard (account_type_Standard_id,
tax_status_id, recipient_id)
( select account_type_standard_seq.nextval,
ts.tax_status_id, r.recipient_id
from tax_status ts, recipient r
where ts.tax_status_code = ?
and r.recipient_code = ?)
This retrieves the appropriate values from the tax_status and recipient tables if a match is found for their respective codes. Unfortunately, recipient_code is nullable, and therefore the ? substitution value could be null. Of course, the implicit join doesn't return a row, so a row doesn't get inserted into my table.
I've tried using NVL on the ? and on the r.recipient_id.
I've tried to force an outer join on the r.recipient_code = ? by adding (+), but it's not an explicit join, so Oracle still didn't add another row.
Anyone know of a way of doing this?
I can obviously modify the statement so that I do the lookup of the recipient_id externally, and have a ? instead of r.recipient_id, and don't select from the recipient table at all, but I'd prefer to do all this in 1 SQL statement.
A: A slightly simplified version of Oglester's solution (the sequence doesn't require a select from DUAL):
INSERT INTO account_type_standard
(account_type_Standard_id, tax_status_id, recipient_id)
VALUES(
account_type_standard_seq.nextval,
(SELECT tax_status_id FROM tax_status WHERE tax_status_code = ?),
(SELECT recipient_id FROM recipient WHERE recipient_code = ?)
)
A: It was not clear to me in the question if ts.tax_status_code is a primary or alternate key or not. Same thing with recipient_code. This would be useful to know.
You can deal with the possibility of your bind variable being null using an OR as follows. You would bind the same thing to the first two bind variables.
If you are concerned about performance, you would be better to check if the values you intend to bind are null or not and then issue different SQL statement to avoid the OR.
insert into account_type_standard
(account_type_Standard_id, tax_status_id, recipient_id)
(
select
account_type_standard_seq.nextval,
ts.tax_status_id,
r.recipient_id
from tax_status ts, recipient r
where (ts.tax_status_code = ? OR (ts.tax_status_code IS NULL and ? IS NULL))
and (r.recipient_code = ? OR (r.recipient_code IS NULL and ? IS NULL))
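If the statement is executed through JDBC or something similar (the ? placeholders suggest so), note that the same value must be bound twice per column. A hedged sketch, with illustrative variable names and assuming conn and sql are already in scope:
PreparedStatement ps = conn.prepareStatement(sql); // sql = the INSERT ... SELECT above
if (taxStatusCode == null) {
    ps.setNull(1, java.sql.Types.VARCHAR); // ts.tax_status_code = ?
    ps.setNull(2, java.sql.Types.VARCHAR); // ... AND ? IS NULL
} else {
    ps.setString(1, taxStatusCode);
    ps.setString(2, taxStatusCode);
}
if (recipientCode == null) {
    ps.setNull(3, java.sql.Types.VARCHAR); // r.recipient_code = ?
    ps.setNull(4, java.sql.Types.VARCHAR);
} else {
    ps.setString(3, recipientCode);
    ps.setString(4, recipientCode);
}
ps.executeUpdate();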
A: Try:
insert into account_type_standard (account_type_Standard_id, tax_status_id, recipient_id)
select account_type_standard_seq.nextval,
ts.tax_status_id,
( select r.recipient_id
from recipient r
where r.recipient_code = ?
)
from tax_status ts
where ts.tax_status_code = ?
A: Outer joins don't work "as expected" in that case because you have explicitly told Oracle you only want data if that criterion on that table matches. In that scenario, the outer join is rendered useless.
A work-around
INSERT INTO account_type_standard
(account_type_Standard_id, tax_status_id, recipient_id)
VALUES(
(SELECT account_type_standard_seq.nextval FROM DUAL),
(SELECT tax_status_id FROM tax_status WHERE tax_status_code = ?),
(SELECT recipient_id FROM recipient WHERE recipient_code = ?)
)
[Edit]
If you expect multiple rows from a sub-select, you can add ROWNUM=1 to each where clause OR use an aggregate such as MAX or MIN. This of course may not be the best solution for all cases.
[Edit] Per comment,
(SELECT account_type_standard_seq.nextval FROM DUAL),
can be just
account_type_standard_seq.nextval,
A: insert into account_type_standard (account_type_Standard_id, tax_status_id, recipient_id)
select account_type_standard_seq.nextval,
ts.tax_status_id,
( select r.recipient_id
from recipient r
where r.recipient_code = ?
)
from tax_status ts
where ts.tax_status_code = ?
A: insert into received_messages(id, content, status)
values (RECEIVED_MESSAGES_SEQ.NEXT_VAL, empty_blob(), '');
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Evolutionary Algorithms: Optimal Repopulation Breakdowns It's really all in the title, but here's a breakdown for anyone who is interested in Evolutionary Algorithms:
In an EA, the basic premise is that you randomly generate a certain number of organisms (which are really just sets of parameters), run them against a problem, and then let the top performers survive.
You then repopulate with a combination of crossbreeds of the survivors, mutations of the survivors, and also a certain number of new random organisms.
Do that several thousand times, and efficient organisms arise.
Some people also do things like introduce multiple "islands" of organisms, which are separate populations that are allowed to crossbreed once in a while.
So, my question is: what are the optimal repopulation percentages?
I have been keeping the top 10% performers, and repopulating with 30% crossbreeds and 30% mutations. The remaining 30% is for new organisms.
I have also tried out the multiple island theory, and I'm interested in your results on that as well.
It is not lost on me that this is exactly the type of problem an EA could solve. Are you aware of anyone trying that?
Thanks in advance!
A: The best resources I've come across for GA and EA were John Koza's books on Genetic Programming. He covers the topic in depth - techniques for encoding the genome, random mutation, breeding, tuning the fitness function.
Personally I've only written a small handful of simulators for pedagogical purposes. What I found was that how I tuned those percentages was related to the particulars of the fitness function I was using, how much random mutation I had introduced and how 'smart' I had tried to make the mutation and breeding - I found that the less 'smart' I tried to make the mutator and the cross-over logic, the faster the population improved its fitness score - I also found that I had been too conservative in the probability of mutation -- my initial runs hit local maxima and had a hard time getting out of them.
None of this gives you concrete answers, but I don't think there are concrete answers, GA is unpredictable by its nature and tuning those kinds of parameters may still be a bit of an art. Of course you could always try a meta-GA, using those parameters as a chromosome, searching for settings that produce a more rapid fitness in the base GA you're running.
Depends on how 'meta' you want to get.
A: I initially tried to model what I thought organic systems were like. Ultimately decided that was no good, and went more aggressive, with 10% kept, 20% mutated, 60% crossbred, and 10% random.
Then I noticed my top 10% were all roughly identical. So I increased the random to 30%. That helped some, but not much.
I did try multiple island, and generation-skipping, and reseeding, which gave better results, but still highly unsatisfactory, very little variation in the top 10%, crazy-long numbers of generations to get any results. Mostly the code learned how to hack my fitness evaluation.
It's really easy to get top performers, so don't worry about keeping too many of them around. Crossbreeds help to pare down positive and negative traits, so they're useful, but really what you want to get is a lot of good random bred in. Focus on mutations and new randoms to bring in features, and let the crossbreeds and top performers just keep track of the best and refine them more slowly. IE: stuff based on the last generation is just finding a better local maxima, randoms find better global maxima.
I still believe optimal answers to your question can be found by observing natural phenomena, such as in a recent article regarding randomness of fruit-fly flight paths, so that may pan out.
Probably the best answer is to just run it and tweak it, don't be afraid to tweak it pretty heavily, the populations are robust. Make sure you implement a way to save and continue.
A: This is a hotly-debated (in the literature and Melanie, et al books) topic that seems to be very domain-specific. What works for one problem of one type with n parameters will almost never work for another problem, another domain, or another parametric set.
So, as TraumaPony suggested, tweak it yourself for each problem you are solving or write something to optimize it for you. The best thing you can do is keep track of all of your "knob-twiddling" and fine-tuning experiments so you can map out the solution terrain and get a feel for how to optimize within that space quickly. Also try alternative techniques like hill-climbing so you can have a baseline to beat.
@Kyle Burton: crossover vs. mutation rates are also constantly debated in each class of problems handed over to GAs and GPs.
A: Assuming you have a method for quantifying the top X% percent performers, I'd suggest that instead of using a hard coded threshold you analyze the performance distribution and make your cutoff somewhere in the range of the first major drop in performance, and then tuning your crossbreads, mutations, and new organisms to fill in the gaps. This way if you have a very "productive" run in which lots of variations were successful you don't throw a significant number of high performers. Also, if you have an "unproductive" run you can scrap more of the existing organisms in favor of more newer organisms that should be taking their place.
A: I have had some success increasing the diversity of population by setting mutation and crossover from a couple of the genes from the parent chromosomes.
This works until the mutation rate drops to zero; since it is likely that there will be a periodic evolutionary pressure to do this, you should try and make sure these genes have a minimum rate.
In practice, I opted for a multi-chromosome genotype. One chromosome coded for the other's reproductive function. The smaller 'reproduction chromosome' had a sensible fixed rates for mutation and crossover.
I found that this would stop the classic plateau and convergence of the population.
As an aside, I tend to do both crossover and mutation for each child.
For generational GAs, I try to shun elitism altogether, but where populating from multiple islands, I keep the top elite from each island. When the islands come together, then the elites can all breed together.
A: There would appear to be a few answers suggesting using a 2nd GA to determine optimum parameters for the 1st GA, with no mention of how to determine the optimum parameters for the 2nd. I can't help but wonder about the religious beliefs of those suggesting this approach...
A: As others have mentioned, the optimal mix would depend on your specific problem and other problem-specific factors such as the size of the solution space.
Before we discuss the evolution breakdown from one generation to the next, it's important to consider the size of each generation. Generally my approach is to start with a fairly large population (~100k-500k individuals) of fairly diverse individuals, which is something that Koza suggests in some of his work. To obtain this diversity from the start, you could divide up your solution space into buckets, and then ensure that at least a certain number of individuals falls into each bucket. (e.g. if you have a tree representation for each individual, ensure that equal numbers are created at depth 2, 3, ..., max_depth)
As far as your actual question, there's no clear way to approach it, but depending on your problem, you may want to emphasize randomness or de-emphasize it. When you want to emphasize it, you should keep less indivuduals intact, and introduce a higher number of new random individuals. You would generally like to do this if there are many local maximums in your solution space and you want to have a broader search.
When you work out a breakdown there are a few things to consider... for one, duplication (a lot of identical or nearly identical individuals at the top, i.e. inbreeding). To reduce this you may want to sweep through your population between generations and replace duplicates with new random individuals or crossbred ones.
That said, my current approach is to keep the top 1%, crossbreed the top 20% into a new 20%, crossbreed the top 40% into the next 20%, crossbreed the top 90% to generate the next 20%, and randomly generate the rest (39%). If there are duplicates, I remove them and replace them with new random individuals.
I don't use mutations because the high number of random individuals should take care of adding in "mutations" during the following crossbreeding.
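For what it's worth, here is a simplified Java sketch of a generational breakdown along these lines; the uniform crossover, the single parent pool, and the exact fractions are all illustrative and should be tuned per problem:
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class Repopulator {
    static final Random RNG = new Random();

    // pop is sorted best-first; returns a next generation of the same size
    static List<double[]> nextGeneration(List<double[]> pop) {
        int n = pop.size();
        int parents = Math.max(1, n / 5);            // breed from the top 20%
        List<double[]> next = new ArrayList<double[]>();
        for (int i = 0; i < Math.max(1, n / 100); i++)
            next.add(pop.get(i));                    // keep the top ~1% intact
        while (next.size() < n * 6 / 10)             // ~60% crossbreeds
            next.add(crossover(pop.get(RNG.nextInt(parents)),
                               pop.get(RNG.nextInt(parents))));
        while (next.size() < n)                      // fill the rest with new randoms
            next.add(randomIndividual(pop.get(0).length));
        return next;
    }

    static double[] crossover(double[] a, double[] b) {
        double[] child = new double[a.length];
        for (int i = 0; i < child.length; i++)
            child[i] = RNG.nextBoolean() ? a[i] : b[i]; // uniform gene pick
        return child;
    }

    static double[] randomIndividual(int len) {
        double[] c = new double[len];
        for (int i = 0; i < len; i++) c[i] = RNG.nextDouble();
        return c;
    }
}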
A: You know what you could do... You could write a genetic algorithm to determine that optimal distribution.
But, usually I keep the top 12%, and 28% crossbreeds; with 30% each for the others.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Web Log analysis in Java How to read a web server log file in Java. This file is getting updated all the time. If I open a new FileInputStream, will it read the log real time?
Regards
A: Here is a solution based on RandomAccessFile:
http://www.informit.com/guides/content.aspx?g=java&seqNum=226
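The core of that approach is easy to sketch: remember your read offset, poll the file for growth, and read only the new bytes. A minimal version (without the error handling you'd want in a real service):
import java.io.IOException;
import java.io.RandomAccessFile;

public class LogTail {
    public static void main(String[] args) throws IOException, InterruptedException {
        RandomAccessFile log = new RandomAccessFile(args[0], "r");
        long pos = log.length();           // start at the current end of the file
        while (true) {
            long len = log.length();
            if (len < pos) pos = len;      // file was rotated or truncated
            if (len > pos) {
                log.seek(pos);
                String line;
                while ((line = log.readLine()) != null)
                    System.out.println(line);
                pos = log.getFilePointer();
            }
            Thread.sleep(1000);            // poll for new data
        }
    }
}
Note that simply opening a FileInputStream doesn't give you tailing behavior by itself - you still need to poll and track your position, which RandomAccessFile makes easy (and it also lets you seek back if the file is truncated).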
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: error installing RMagick from gem Trying to install the RMagick gem is failing with an error about being unable to find ImageMagick libraries, even though I'm sure they are installed.
The pertinent output from gem install rmagick is:
checking for InitializeMagick() in -lMagick... no
checking for InitializeMagick() in -lMagickCore... no
checking for InitializeMagick() in -lMagick++... no
Can't install RMagick 2.6.0. Can't find the ImageMagick library or one of the dependent libraries. Check the mkmf.log file for more detailed information.
*** extconf.rb failed ***
And looking in mkmf.log reveals:
have_library: checking for InitializeMagick() in -lMagick... -------------------- no
"/usr/local/bin/gcc -o conftest -I.
-I/usr/local/lib/ruby/1.8/i386-solaris2.10 -I. -I/usr/local/include/ImageMagick -I/usr/local/include/ImageMagick conftest.c -L. -L/usr/local/lib -Wl,-R/usr/local/lib -L/usr/local/lib -L/usr/local/lib -R/usr/local/lib -lfreetype -lz -L/usr/local/lib -L/usr/local/lib -lMagickCore -lruby-static -lMagick -ldl -lcrypt -lm -lc"
ld: fatal: library -lMagick: not found
ld: fatal: File processing errors. No output written to conftest
This is on Solaris 10 x86 with ImageMagick version 6.4.3 and RMagick version 2.6.0
If I need to add something to LDFLAGS, it's not clear to me what that would be. I installed ImageMagick from source and it should be in the usual places, i.e.,
# ls -l /usr/local/lib/ | grep -i magick
drwxr-xr-x 5 root root 512 Sep 24 23:09 ImageMagick-6.4.3/
-rw-r--r-- 1 root root 10808764 Sep 25 02:09 libMagickCore.a
-rwxr-xr-x 1 root root 1440 Sep 25 02:09 libMagickCore.la*
-rw-r--r-- 1 root root 2327072 Sep 25 02:09 libMagickWand.a
-rwxr-xr-x 1 root root 1472 Sep 25 02:09 libMagickWand.la*
ImageMagick-6.4.3/ contains nothing interesting and I can't find any other files that I might be able to point gem install at.
Any advice would be much appreciated!!
googling hasn't been too helpful.
thanks -
A: problem solved.
RMagick was unable to find ImageMagick because I neglected to build the shared objects (there were no .so files installed as you can see from the "ls" in the original question). The solution was to add --with-shared to my configure options.
This however caused other problems. Most notably, make failing with "undefined symbol" messages for libiconv. This was solved by setting CFLAGS to point to libiconv:
export CFLAGS="-liconv"
Ultimately, my successful configure command was:
./configure --disable-static --with-modules --without-perl --with-quantum-depth=8 --with-bzlib=no --with-libiconv
and after that, make, make install, and gem install rmagick all worked smoothly.
thanks,
R
A: I ran into this problem on OpenSuSE 11.4 - after installing a whole load of packages it turned out that libtool was the missing element....
A: The linker cannot find libMagick in the standard places. Maybe you installed ImageMagick in a non-standard place that you have to specify via LDFLAGS?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What is a .snk for? What is a .snk file for? I know it stands for Strongly Named Key, but all explanations of what it is and how it works goes over my head.
Is there any simple explanation on how a strongly named key is used and how it works?
A: The .snk file is used for signing the assemblies in order to be able to add them to the Global Assembly Cache (GAC).
The .snk file contains the public and private tokens for your key. When you want to sign some data (or binary) with that key, a checksum is calculated on the data, which is then encrypted with the private token. The encrypted checksum is then added to the data. Anyone can use the public token from your key to decrypt the checksum and compare it to the checksum they calculated to verify that the signed data hasn't been tampered.
You can read more about the public key cryptography at http://en.wikipedia.org/wiki/Public-key_cryptography.
A: A .snk file is a persisted version of your "Key" produced by the sn utility in the framework utility set.
You then use this file to 'digitally sign' your assemblies. It is a 2-part key.. private-public key combination. The public part of the key is published i.e. known to everyone. The private part is known to only you, the component/app developer and intended to be kept that way.
When you sign your assembly, it uses the private key and a hash value for your assembly to create a digital signature which is embedded in your assembly. Thereafter, anyone who loads your assembly goes through a verification step. The public key is used to validate if the assembly really comes from you.. you just need the public key for this (which is also embedded in a tokenized form in the assembly manifest). If the assembly has been tampered with, the hash value would be different and the assembly load would be aborted. This is a security mechanism.
A: An .snk file is used to ensure that someone else can't slip an assembly of their own in the place of yours. It provides a pair of encryption/decryption keys.
When an .snk file is used to sign an assembly, a hashcode value is calculated from the assembly file and encrypted using the private key. That encrypted "digest" is then tacked on to the assembly together with the public key from the .snk file.
Then when someone receives your assembly, they can also calculate that hashcode value. They use the public key to decrypt the one that you calculated and compare the calculated values. If the assembly had been changed at all, those values will be different and the user of the assembly will know that the assembly you have is not the one you provided.
In the context of BizTalk Server, whoever builds any custom assemblies that are used by your BizTalk solution will need to use a .snk file to sign the assembly so that BizTalk server can load it into the GAC and use it.
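As a rough illustration of the tooling involved (the file and assembly names below are invented), the key pair is typically generated with the sn.exe utility, handed to the compiler, and the signed result can then be installed into the GAC:

sn -k MyCompany.snk
csc /target:library /keyfile:MyCompany.snk MyLibrary.cs
gacutil /i MyLibrary.dll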
A:
In the .NET world the SNK file is used to sign your compiled binaries.
This allows a couple things to happen:
*
*You can register the Assembly in the GAC (Global Assembly Cache), basically so you can reference it from many places on the same machine without having to maintain multiple copies.
*You can use your Binaries from within other binaries that are also signed (this is a strange viral sort of behavior with regard to signed assemblies).
*Your assembly cannot (easily) be modified by 3rd parties who do not have access to the SNK file, providing at least a small amount of security.
I'm not familiar with how BizTalk server works, so I don't think I can shed much light on what specific purpose they serve within that environment.
Hope this was somewhat helpful.
A: The .snk file is used to apply a strong name to a .NET assembly. Such a strong name consists of a simple text name, version number, and culture information (if provided), plus a public key and a digital signature.
The SNK contains a unique key pair - a private and public key that can be used to ensure that you have a unique strong name for the assembly. When the assembly is strongly-named, a "hash" is constructed from the contents of the assembly, and the hash is encrypted with the private key. Then this signed hash is placed in the assembly along with the public key from the .snk.
Later on, when someone needs to verify the integrity of the strongly-named assembly, they build a hash of the assembly's contents, and use the public key from the assembly to decrypt the hash that came with the assembly - if the two hashes match, the assembly verification passes.
It's important to be able to verify assemblies in this way to ensure that nobody swaps out an assembly for a malicious one that will subvert the whole application. This is why non-strong-named assemblies aren't trusted in the same way that strongly-named assemblies are, so they can't be placed in the GAC. Also, there's a chain of trust - you can't generate a strongly-named assembly that references non-strongly-named assemblies.
The article "The Secrets of Strong Naming (archived at the Wayback Machine)". Does an excellent job of explaining these concepts in more detail. With pictures.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "169"
}
|
Q: Convert console exe to dll in C I am interested in calling SoX, an open source console application, from another Windows GUI program (written in Delphi naturally). Instead of dealing with scraping and hiding the console window, I would like to just convert the application to a DLL that I can call from my application.
Before I start down this path I am curious how much work I should expect to be in for. Are we talking a major undertaking, or is there a straightforward solution? I know some C, but am by no means an expert.
I am not expecting SoX specific details, just EXE console application conversion to DLL in general. If someone is familiar with SoX though, even better.
A: For the specific topic of turning a console executable into a library by modifying the C source code, it depends on how the command-line application is factored. If it's written in such a way that I/O is funneled through a small set of functions or even better function pointers, then obviously it will be trivial.
If it's all done with printf, scanf and friends, then you'll probably be best off by finding / creating an include file that all the source files include and adding a macro that redirects printf/scanf and friends to your own functions that are written so as to be amenable to DLL implementation. Things like printf can be built from vsnprintf (use the n-version for safety), so you don't need to reimplement the whole C RTL I/O subsystem. However, there is no vsscanf, but there are third-party implementations on the web.
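As a sketch of that macro route (every name below is invented for illustration; none of this is actual SoX code), the shim might look like:

/* dll_io.h -- hypothetical shim header, included ahead of everything else */
#include <stdarg.h>
#include <stdio.h>

int dll_printf(const char *fmt, ...);
#define printf dll_printf              /* reroute every printf call */

/* dll_io.c -- buffers output and hands it to the host application */
static void (*g_out)(const char *) = 0;   /* callback set by the host */

void dll_set_output(void (*cb)(const char *)) { g_out = cb; }

int dll_printf(const char *fmt, ...)
{
    char buf[1024];
    va_list ap;
    va_start(ap, fmt);
    int n = vsnprintf(buf, sizeof buf, fmt, ap);  /* the n-version, for safety */
    va_end(ap);
    if (g_out)
        g_out(buf);
    return n;
}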
If the code is using fprintf, fscanf, etc. to enable indirection between files and the console, you're still out of luck. The FILE structure is opaque, and unlike Pascal text files, a portable text file driver cannot be implemented. It might still be possible if you spelunk in your specific C RTL, but you'd be better advised going down the macro route and reimplementing your own renamed FILE type.
Finally, the "popen()" approach is possible in Delphi and made somewhat easier in Delphi 2009 with the TTextReader and TTextWriter classes. Combine these with TFileStream wrapped around pipes, and specify pipes for standard input, standard output and standard error in the new process and STARTF_USESTDHANDLES, etc., and it will work. If you don't feel like writing your own, there are third-party equivalents / samples on the web for Delphi too. Here's one.
A: In Windows, you just call CreateProcess with the SoX command line. I don't know the Delphi bindings for Win32, but I've done this exact thing in both Win32 and C#.
And now that you know CreateProcess is what you want to call, a google search on how to do that from Delphi should give you all the code you need.
Delphi Corner Article - Using CreateProcess to Execute Programs
Calling CreateProcess() the easy way
A: You might not even need a DLL, you can use the popen() function to run a console application and collect any output text.
A: Run the process, the way Indiv advised and capture the output like how Adam has shown.
However if you still want to do the DLL conversion, this will get you started
*
*Configure SOX for windows and compile it
*Create an empty DLL project using your C++ tool
*Add the SOX files to be part of the project
*Add a new function called DllMain
BOOL APIENTRY DllMain( HANDLE hModule,
                       DWORD ul_reason_for_call,
                       LPVOID lpReserved ) { return TRUE; }
*Add a .DEF file (use the project name as the file name) that lists the exports in the DLL - Add the following content to it
LIBRARY "name.DLL"
EXPORTS
CallOldMain PRIVATE
*Rename the main of SOX as CallOldMain
*Write a CUSTOM function to log the output / return error etc.
*Find all printfs / cout in the SOX application and replace it with calls to your custom function above
*Once the DLL is compiled you can now call the function CallOldMain with the same parameters a C main program expects. You could modify this signature to return the errors / output from above (a consuming-side sketch follows this list).
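On the consuming side, a minimal Win32 sketch of resolving and calling the export at run time might look like this (the DLL name and arguments are made up for illustration):

#include <windows.h>

typedef int (*CallOldMainFn)(int argc, char *argv[]);

int main(void)
{
    HMODULE dll = LoadLibraryA("sox.dll");    /* illustrative name */
    if (!dll) return 1;

    CallOldMainFn callOldMain =
        (CallOldMainFn)GetProcAddress(dll, "CallOldMain");

    char a0[] = "sox", a1[] = "in.wav", a2[] = "out.wav"; /* fake arguments */
    char *args[] = { a0, a1, a2 };
    int rc = callOldMain ? callOldMain(3, args) : -1;

    FreeLibrary(dll);
    return rc;
}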
A: Disclaimer: I know nothing about SoX. It might be that the code is structured to make this easy, or it might be more hard. Either way, the process is the same:
First you want to find the functions in the SoX application that you want to call. Most likely the console app has code to parse the command line and call the appropriate functions. So first off, find the functions you want to use.
Next, check out the info on exporting functions in DLLs from C at this site: Creating And Using DLLs
Then make a new makefile or visual studio project file with the target being a DLL, and add the sourcefiles from the SoX program that you have modified to be exported.
A: You don't mention what your toolchain is, but if you configure gcc in Windows, you can use the normal config;make;make install to just compile sox. In the process, it will create a dll file, and the console app. Or, you can just specify the make target to only make the dll. This will compile a windows native library that only depends on the MS C runtime dll, and you can use this in your own app.
A: You can execute a console application and capture its output using pipes. You use one side of the pipe as stdout for the CreateProcess call and read from the other side like a common file.
You can see a working example written in delphi here: http://delphi.about.com/cs/adptips2001/a/bltip0201_2.htm
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Can you Catch an Exception after the main application unit has ended? In one of our applications I'm getting an exception that I cannot seem to find or trap.
...
Application.CreateForm(TFrmMain, FrmMain);
outputdebugstring(pansichar('Application Run')); //this is printed
Application.Run;
outputdebugstring(pansichar('Application Run After')); //this is printed
end.
<--- The Exception seems to be here
The Event log shows
> ODS: Application Run
> //Various Application Messages
> ODS: Application Run After
> First Chance Exception at $xxxxxxxx. ...etc
All I can think of is that it is the finalization code of one of the units.
(Delphi 7)
A: Try installing MadExcept - it should catch the exception and give you a stack-trace.
It helped me when I had a similar issue.
A: Here's two things you can try:
1) Quick and easy: hit 'F7' on the final 'end.'. This will step you into the other finalization blocks.
2) Try overriding the Application.OnException Event.
A: The SysUtils unit actually sets up the default ErrorProc and ExceptProc procedures in its initialization section, and undoes them in its finalization section, so often in this situation it's worth ensuring that SysUtils is the very first unit in the uses clause in your dpr, so then it will be the last one finalised. Might be enough to get you some meaningful data about what is going wrong.
A: Finalization exceptions are tricky. Even if you put SysUtils first in your project file, your application object may already be gone, which means your global exception handler is gone too. MadExcept may work for this though.
Another solution is to put a Try / Except block in each of your unit finalization sections, and then handle the exceptions there.
What is your objective? Do you want to suppress the exception or debug it? Debugging it can be done by stepping through them with F7 as Zartog suggested. If you discover which unit has the exception in finalization then you might try placing it in a different order in the uses clause it is called from.
Good luck!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: In SharePoint Designer's workflow editor how do I get the workflow initiator's username? In SharePoint Designer's workflow editor I wish to retrieve the username/name of the workflow initiator (i.e. who kicked off or triggered the workflow) - this is relatively easy to do using 3rd party products such as Nintex Workflow 2007 (where I would use something like {Common:Initiator}) - but I can't seem to find any way out of the box to do this using SharePoint Designer and MOSS 2007.
Update
It does not look like this rather obvious feature is supported OOTB, so I ended up writing a custom activity (as suggested by one of the answers). I have listed the activity's code here for reference, though I suspect there are probably a few instances of this floating around out there on blogs, as it's a pretty trivial solution:
public partial class LookupInitiatorInfo : Activity
{
public static DependencyProperty __ActivationPropertiesProperty =
DependencyProperty.Register("__ActivationProperties",
typeof(Microsoft.SharePoint.Workflow.SPWorkflowActivationProperties),
typeof(LookupInitiatorInfo));
public static DependencyProperty __ContextProperty =
DependencyProperty.Register("__Context", typeof (WorkflowContext),
typeof (LookupInitiatorInfo));
public static DependencyProperty PropertyValueVariableProperty =
DependencyProperty.Register("PropertyValueVariable", typeof (string),
typeof(LookupInitiatorInfo));
public static DependencyProperty UserPropertyProperty =
DependencyProperty.Register("UserProperty", typeof (string),
typeof (LookupInitiatorInfo));
public LookupInitiatorInfo()
{
InitializeComponent();
}
[Description("ActivationProperties")]
[ValidationOption(ValidationOption.Required)]
[Browsable(true)]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
public Microsoft.SharePoint.Workflow.SPWorkflowActivationProperties __ActivationProperties
{
get { return ((Microsoft.SharePoint.Workflow.SPWorkflowActivationProperties)(base.GetValue(__ActivationPropertiesProperty))); }
set { base.SetValue(__ActivationPropertiesProperty, value); }
}
[Description("Context")]
[ValidationOption(ValidationOption.Required)]
[Browsable(true)]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
public WorkflowContext __Context
{
get { return ((WorkflowContext)(base.GetValue(__ContextProperty))); }
set { base.SetValue(__ContextProperty, value); }
}
[Description("UserProperty")]
[ValidationOption(ValidationOption.Required)]
[Browsable(true)]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
public string UserProperty
{
get { return ((string) (base.GetValue(UserPropertyProperty))); }
set { base.SetValue(UserPropertyProperty, value); }
}
[Description("PropertyValueVariable")]
[ValidationOption(ValidationOption.Required)]
[Browsable(true)]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
public string PropertyValueVariable
{
get { return ((string) (base.GetValue(PropertyValueVariableProperty))); }
set { base.SetValue(PropertyValueVariableProperty, value); }
}
protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
{
// valid values for the UserProperty (in most cases you
// would use LoginName or Name)
//Sid
//ID
//LoginName
//Name
//IsDomainGroup
//Email
//RawSid
//Notes
try
{
string err = string.Empty;
if (__ActivationProperties == null)
{
err = "__ActivationProperties was null";
}
else
{
SPUser user = __ActivationProperties.OriginatorUser;
if (user != null && UserProperty != null)
{
PropertyInfo property = typeof (SPUser).GetProperty(UserProperty);
if (property != null)
{
object value = property.GetValue(user, null);
PropertyValueVariable = (value != null) ? value.ToString() : "";
}
else
{
err = string.Format("no property found with the name \"{0}\"", UserProperty);
}
}
else
{
err = "__ActivationProperties.OriginatorUser was null";
}
}
if (!string.IsNullOrEmpty(err))
Common.LogExceptionToWorkflowHistory(new ArgumentOutOfRangeException(err), executionContext,
WorkflowInstanceId);
}
catch (Exception e)
{
Common.LogExceptionToWorkflowHistory(e, executionContext, WorkflowInstanceId);
}
return ActivityExecutionStatus.Closed;
}
}
And then wire it up with the following .action xml file:
<?xml version="1.0" encoding="utf-8"?>
<WorkflowInfo Language="en-us">
<Actions>
<Action Name="Lookup initiator user property"
ClassName="XXX.ActivityLibrary.LookupInitiatorInfo"
Assembly="XXX.ActivityLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=XXX"
AppliesTo="all"
Category="WormaldWorkflow Custom Actions">
<RuleDesigner Sentence="Lookup initating users property named %1 and store in %2">
<FieldBind Field="UserProperty" DesignerType="TextArea" Id="1" Text="LoginName" />
<FieldBind Field="PropertyValueVariable" DesignerType="ParameterNames" Text="variable" Id="2"/>
</RuleDesigner>
<Parameters>
<Parameter Name="__Context" Type="Microsoft.Sharepoint.WorkflowActions.WorkflowContext, Microsoft.SharePoint.WorkflowActions" Direction="In"/>
<Parameter Name="__ActivationProperties" Type="Microsoft.SharePoint.Workflow.SPWorkflowActivationProperties, Microsoft.SharePoint" Direction="In"/>
<Parameter Name="UserProperty" Type="System.String, mscorlib" Direction="In" />
<Parameter Name="PropertyValueVariable" Type="System.String, mscorlib" Direction="Out" />
</Parameters>
</Action>
</Actions>
</WorkflowInfo>
A: For those that google into this article and are now using SharePoint 2010, the workflow initiator variable is now supported OOTB in SharePoint Designer.
The datasource would be "Workflow Context" and the field is, of course, "Initiator" and you can choose to return it as the "Display Name", "Email", "Login Name" or the "User ID Number"
A: I don't think this is possible to do in SharePoint Designer out of the box. You could probably write a custom action to get the originator, but I don't believe it is exposed through the SPD workflow interface at all.
The best you could probably do is get the user who created or modified the item in the list, but this wouldn't handle cases where the workflow was manually run.
A: I can think of a simple but not very sophisticated solution for this one using just SPD. In your workflow steps, create a test item in a secondary list (probably a task list which stores the workflowId and itemId properties for reference back) and then do a lookup in your workflow on that list to see who is the creator of that item; that value would be the current workflow initiator.
A: The custom activity solution only works if you are using MOSS. If you only have WSS 3.0 you can add one more step in your workflow and set a custom comment field with any information; this makes the last modified person become the same as the workflow initiator, and then you can use the ModifiedBy field to make any decision you need.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Why use iterators instead of array indices? Take the following two lines of code:
for (int i = 0; i < some_vector.size(); i++)
{
//do stuff
}
And this:
for (some_iterator = some_vector.begin(); some_iterator != some_vector.end();
some_iterator++)
{
//do stuff
}
I'm told that the second way is preferred. Why exactly is this?
A: STL iterators are mostly there so that the STL algorithms like sort can be container independent.
If you just want to loop over all the entries in a vector just use the index loop style.
It is less typing and easier to parse for most humans. It would be nice if C++ had a simple foreach loop without going overboard with template magic.
for( size_t i = 0; i < some_vector.size(); ++i )
{
T& rT = some_vector[i];
// now do something with rT
}
A: It's part of the modern C++ indoctrination process. Iterators are the only way to iterate most containers, so you use it even with vectors just to get yourself into the proper mindset. Seriously, that's the only reason I do it - I don't think I've ever replaced a vector with a different kind of container.
Wow, this is still getting downvoted after three weeks. I guess it doesn't pay to be a little tongue-in-cheek.
I think the array index is more readable. It matches the syntax used in other languages, and the syntax used for old-fashioned C arrays. It's also less verbose. Efficiency should be a wash if your compiler is any good, and there are hardly any cases where it matters anyway.
Even so, I still find myself using iterators frequently with vectors. I believe the iterator is an important concept, so I promote it whenever I can.
A: because you are not tying your code to the particular implementation of the some_vector list. If you use array indices, it has to be some form of array; if you use iterators you can use that code on any list implementation.
A: I don't think it makes much difference for a vector. I prefer to use an index myself as I consider it more readable, and you can do random access like jumping forward 6 items or jumping backwards if need be.
I also like to make a reference to the item inside the loop like this so there are not a lot of square brackets around the place:
for(size_t i = 0; i < myvector.size(); i++)
{
MyClass &item = myvector[i];
// Do stuff to "item".
}
Using an iterator can be good if you think you might need to replace the vector with a list at some point in the future and it also looks more stylish to the STL freaks but I can't think of any other reason.
A: After having learned a little more on the subject of this answer, I realize it was a bit of an oversimplification. The difference between this loop:
for (some_iterator = some_vector.begin(); some_iterator != some_vector.end();
some_iterator++)
{
//do stuff
}
And this loop:
for (int i = 0; i < some_vector.size(); i++)
{
//do stuff
}
Is fairly minimal. In fact, the syntax of doing loops this way seems to be growing on me:
while (it != end){
//do stuff
++it;
}
Iterators do unlock some fairly powerful declarative features, and when combined with the STL algorithms library you can do some pretty cool things that are outside the scope of array index administrivia.
A: Imagine some_vector is implemented with a linked list. Then requesting an item in the i-th place requires i operations to traverse the list of nodes. Now, if you use an iterator, generally speaking it will make its best effort to be as efficient as possible (in the case of a linked list, it will maintain a pointer to the current node and advance it in each iteration, requiring just a single operation). A sketch of this follows below.
So it provides two things:
*
*Abstraction of use: you just want to iterate some elements, you don't care about how to do it
*Performance
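To make the linked-list point above concrete, here is a small comparison; with std::list, indexed-style access has to re-walk the nodes on every pass, while the iterator advances a single node per step:

#include <cstddef>
#include <iostream>
#include <iterator>
#include <list>

int main()
{
    std::list<int> xs = {10, 20, 30, 40};

    // O(n^2) overall: each access walks from begin() all over again.
    for (std::size_t i = 0; i < xs.size(); ++i) {
        std::list<int>::iterator it = xs.begin();
        std::advance(it, i);            // i hops just to reach element i
        std::cout << *it << '\n';
    }

    // O(n) overall: one hop per element.
    for (std::list<int>::iterator it = xs.begin(); it != xs.end(); ++it)
        std::cout << *it << '\n';
}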
A: The second form represents what you're doing more accurately. In your example, you don't care about the value of i, really - all you want is the next element in the iterator.
A: Indexing requires an extra mul operation. For example, for vector<int> v, the compiler turns v[i] into a read at the buffer's base address plus sizeof(int) * i.
A: I'm going to be the devil's advocate here and not recommend iterators. The main reason why is that all the source code I've worked on, from desktop application development to game development, has neither used nor needed iterators. They have never been required, and secondly the hidden assumptions, code mess, and debugging nightmares you get with iterators make them a prime example not to use in any application that requires speed.
Even from a maintenance standpoint they're a mess. It's not because of them but because of all the aliasing that happens behind the scenes. How do I know that you haven't implemented your own virtual vector or array list that does something completely different from the standard? Do I know what type is current during runtime? Did you overload an operator I didn't have time to check in all your source code? Hell, do I even know what version of the STL you're using?
The next problem you get with iterators is leaky abstraction, though there are numerous web sites that discuss this in detail.
Sorry, I have not and still do not see any point in iterators. If they abstract the list or vector away from you, when in fact you should already know what vector or list you're dealing with, then you're just setting yourself up for some great debugging sessions in the future.
A: The first form is efficient only if vector.size() is a fast operation. This is true for vectors, but not for lists, for example. Also, what are you planning to do within the body of the loop? If you plan on accessing the elements as in
T elem = some_vector[i];
then you're making the assumption that the container has operator[](std::size_t) defined. Again, this is true for vector but not for other containers.
The use of iterators bring you closer to container independence. You're not making assumptions about random-access ability or fast size() operation, only that the container has iterator capabilities.
You could enhance your code further by using standard algorithms. Depending on what it is you're trying to achieve, you may elect to use std::for_each(), std::transform() and so on. By using a standard algorithm rather than an explicit loop you're avoiding re-inventing the wheel. Your code is likely to be more efficient (given the right algorithm is chosen), correct and reusable.
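As a small sketch of that (assuming C++11 for the lambda), here is the same loop written against a standard algorithm; nothing here cares whether the container is a vector, list, or deque:

#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v = {1, 2, 3};

    // Swap std::vector for std::list or std::deque and this line is unchanged.
    std::for_each(v.begin(), v.end(),
                  [](int x) { std::cout << x << '\n'; });
}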
A: You might want to use an iterator if you are going to add/remove items to the vector while you are iterating over it.
some_iterator = some_vector.begin();
while (some_iterator != some_vector.end())
{
if (/* some condition */)
{
some_iterator = some_vector.erase(some_iterator);
// some_iterator now positioned at the element after the deleted element
}
else
{
if (/* some other condition */)
{
some_iterator = some_vector.insert(some_iterator, some_new_value);
// some_iterator now positioned at new element
}
++some_iterator;
}
}
If you were using indices you would have to shuffle items up/down in the array to handle the insertions and deletions.
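For the pure-deletion case, an alternative worth knowing is the erase-remove idiom, which keeps the iterator bookkeeping inside the algorithm (the predicate below is just an example condition):

#include <algorithm>
#include <vector>

void removeNegatives(std::vector<int> &some_vector)
{
    // remove_if partitions the kept elements to the front,
    // then erase chops off the leftover tail in one go.
    some_vector.erase(
        std::remove_if(some_vector.begin(), some_vector.end(),
                       [](int x) { return x < 0; }),
        some_vector.end());
}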
A: During iteration you don't need to know the number of items to be processed. You just need the item, and iterators do such things very well.
A: No one has mentioned yet that one advantage of indices is that they do not become invalid when you append to a contiguous container like std::vector, so you can add items to the container during iteration.
This is also possible with iterators, but you must call reserve(), and therefore need to know how many items you'll append.
A: If you have access to C++11 features, then you can also use a range-based for loop for iterating over your vector (or any other container) as follows:
for (auto &item : some_vector)
{
//do stuff
}
The benefit of this loop is that you can access elements of the vector directly via the item variable, without running the risk of messing up an index or making a mistake when dereferencing an iterator. In addition, the placeholder auto prevents you from having to repeat the type of the container elements, which brings you even closer to a container-independent solution.
Notes:
*
*If you need the element index in your loop and the operator[] exists for your container (and is fast enough for you), then better go for your first way.
*A range-based for loop cannot be used to add/delete elements into/from a container. If you want to do that, then better stick to the solution given by Brian Matthews.
*If you don't want to change the elements in your container, then you should use the keyword const as follows: for (auto const &item : some_vector) { ... }.
A: Separation of Concerns
It's very nice to separate the iteration code from the 'core' concern of the loop. It's almost a design decision.
Indeed, iterating by index ties you to the implementation of the container. Asking the container for a begin and end iterator, enables the loop code for use with other container types.
Also, in the std::for_each way, you TELL the collection what to do, instead of ASKing it something about its internals
The 0x standard is going to introduce closures, which will make this approach much easier to use - have a look at the expressive power of e.g. Ruby's [1..6].each { |i| print i; }...
Performance
But maybe a much-overlooked issue is that using the for_each approach yields an opportunity to have the iteration parallelized - the Intel Threading Building Blocks can distribute the code block over the number of processors in the system!
Note: after discovering the algorithms library, and especially foreach, I went through two or three months of writing ridiculously small 'helper' operator structs which will drive your fellow developers crazy. After this time, I went back to a pragmatic approach - small loop bodies deserve no foreach no more :)
A must read reference on iterators is the book "Extended STL".
The GoF have a tiny little paragraph in the end of the Iterator pattern, which talks about this brand of iteration; it's called an 'internal iterator'. Have a look here, too.
A: Because it is more object-oriented. if you are iterating with an index you are assuming:
a) that those objects are ordered
b) that those objects can be obtained by an index
c) that the index increment will hit every item
d) that the index starts at zero
With an iterator, you are saying "give me everything so I can work with it" without knowing what the underlying implementation is. (In Java, there are collections that cannot be accessed through an index)
Also, with an iterator, no need to worry about going out of bounds of the array.
A: Aside from all of the other excellent answers... int may not be large enough for your vector. Instead, if you want to use indexing, use the size_type for your container:
for (std::vector<Foo>::size_type i = 0; i < myvector.size(); ++i)
{
Foo& this_foo = myvector[i];
// Do stuff with this_foo
}
A: Another nice thing about iterators is that they better allow you to express (and enforce) your const-preference. This example ensures that you will not be altering the vector in the midst of your loop:
for(std::vector<Foo>::const_iterator pos=foos.begin(); pos != foos.end(); ++pos)
{
// Foo & foo = *pos; // this won't compile
const Foo & foo = *pos; // this will compile
}
A: I probably should point out you can also call
std::for_each(some_vector.begin(), some_vector.end(), &do_stuff);
A: Several good points already. I have a few additional comments:
*
*Assuming we are talking about the C++ standard library, "vector" implies a random access container that has the guarantees of C-array (random access, contiguos memory layout etc). If you had said 'some_container', many of the above answers would have been more accurate (container independence etc).
*To eliminate any dependencies on compiler optimization, you could move some_vector.size() out of the loop in the indexed code, like so:
const size_t numElems = some_vector.size();
for (size_t i = 0; i < numElems; ++i) { /* do stuff */ }
*Always pre-increment iterators and treat post-increments as exceptional cases.
for (some_iterator = some_vector.begin(); some_iterator != some_vector.end(); ++some_iterator){ //do stuff }
So assuming an indexable std::vector<>-like container, there is no good reason to prefer one over the other when going sequentially through the container. If you have to refer to older or newer element indexes frequently, then the indexed version is more appropriate.
In general, using the iterators is preferred because algorithms make use of them and behavior can be controlled (and implicitly documented) by changing the type of the iterator. Array locations can be used in place of iterators, but the syntactical difference will stick out.
A: I don't use iterators for the same reason I dislike foreach-statements. When having multiple inner-loops it's hard enough to keep track of global/member variables without having to remember all the local values and iterator-names as well. What I find useful is to use two sets of indices for different occasions:
for(int i=0;i<anims.size();i++)
for(int j=0;j<bones.size();j++)
{
int animIndex = i;
int boneIndex = j;
// in relatively short code I use indices i and j
... animation_matrices[i][j] ...
// in long and complicated code I use indices animIndex and boneIndex
... animation_matrices[animIndex][boneIndex] ...
}
I don't even want to abbreviate things like "animation_matrices[i]" to some random "anim_matrix"-named iterator, for example, because then you can't see clearly from which array this value originates.
A: *
*If you like being close to the metal / don't trust their implementation details, don't use iterators.
*If you regularly switch out one collection type for another during development, use iterators.
*If you find it difficult to remember how to iterate different sorts of collections (maybe you have several types from several different external sources in use), use iterators to unify the means by which you walk over elements. This applies to say switching a linked list with an array list.
Really, that's all there is to it. It's not as if you're going to gain more brevity either way on average, and if brevity really is your goal, you can always fall back on macros.
A: Even better than "telling the CPU what to do" (imperative) is "telling the libraries what you want" (functional).
So instead of using loops you should learn the algorithms present in stl.
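A small sketch of that mindset (C++11 assumed): ask the library the question instead of spelling out the loop yourself:

#include <algorithm>
#include <vector>

int countEvens(const std::vector<int> &v)
{
    // "How many even numbers?" -- no index, no iterator bookkeeping.
    return (int)std::count_if(v.begin(), v.end(),
                              [](int x) { return x % 2 == 0; });
}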
A: I always use an array index because many applications of mine require something like "display thumbnail image". So I wrote something like this:
some_vector[0].left = 0;
some_vector[0].top = 0;
for (int i = 1; i < some_vector.size(); i++)
{
    some_vector[i].left = some_vector[i-1].width + some_vector[i-1].left;
    if (i % 6 == 0)
    {
        some_vector[i].top = some_vector[i-1].top + some_vector[i-1].height;
        some_vector[i].left = 0;
    }
}
A: For container independence
A: Both implementations are correct, but I would prefer the 'for' loop with an index. As we have decided to use a Vector and not any other container, using indexes would be the best option. Using iterators with Vectors would lose the very benefit of having the objects in contiguous memory blocks, which helps ease their access.
A: I felt that none of the answers here explain why I like iterators as a general concept over indexing into containers. Note that most of my experience using iterators doesn't actually come from C++ but from higher-level programming languages like Python.
The iterator interface imposes fewer requirements on consumers of your function, which allows consumers to do more with it.
If all you need is to be able to forward-iterate, the developer isn't limited to using indexable containers - they can use any class implementing operator++(T&), operator*(T) and operator!=(const T&, const T&).
#include <iostream>
template <class InputIterator>
void printAll(InputIterator begin, InputIterator end)
{
for (auto current = begin; current != end; ++current) {
std::cout << *current << "\n";
}
}
// elsewhere...
printAll(myVector.begin(), myVector.end());
Your algorithm works for the case you need it - iterating over a vector - but it can also be useful for applications you don't necessarily anticipate:
#include <cstdint>
#include <random>
class RandomIterator
{
private:
std::mt19937 random;
std::uint_fast32_t current;
std::uint_fast32_t floor;
std::uint_fast32_t ceil;
public:
RandomIterator(
std::uint_fast32_t floor = 0,
std::uint_fast32_t ceil = UINT_FAST32_MAX,
std::uint_fast32_t seed = std::mt19937::default_seed
) :
floor(floor),
ceil(ceil)
{
random.seed(seed);
++(*this);
}
RandomIterator& operator++()
{
    current = floor + (random() % (ceil - floor));
    return *this;
}
std::uint_fast32_t operator*() const
{
return current;
}
bool operator!=(const RandomIterator &that) const
{
return current != that.current;
}
};
int main()
{
// roll a 1d6 until we get a 6 and print the results
RandomIterator firstRandom(1, 7, std::random_device()());
RandomIterator secondRandom(6, 7);
printAll(firstRandom, secondRandom);
return 0;
}
Attempting to implement a square-brackets operator which does something similar to this iterator would be contrived, while the iterator implementation is relatively simple. The square-brackets operator also makes implications about the capabilities of your class - that you can index to any arbitrary point - which may be difficult or inefficient to implement.
Iterators also lend themselves to decoration. People can write iterators which take an iterator in their constructor and extend its functionality:
template<class InputIterator, typename T>
class FilterIterator
{
private:
InputIterator internalIterator;
public:
FilterIterator(const InputIterator &iterator):
internalIterator(iterator)
{
}
virtual bool condition(T) = 0;
FilterIterator<InputIterator, T>& operator++()
{
do {
++(internalIterator);
} while (!condition(*internalIterator));
return *this;
}
T operator*()
{
// Needed for the first result
if (!condition(*internalIterator))
++(*this);
return *internalIterator;
}
virtual bool operator!=(const FilterIterator& that) const
{
return internalIterator != that.internalIterator;
}
};
template <class InputIterator>
class EvenIterator : public FilterIterator<InputIterator, std::uint_fast32_t>
{
public:
EvenIterator(const InputIterator &internalIterator) :
FilterIterator<InputIterator, std::uint_fast32_t>(internalIterator)
{
}
bool condition(std::uint_fast32_t n)
{
return !(n % 2);
}
};
int main()
{
// Rolls a d20 until a 20 is rolled and discards odd rolls
EvenIterator<RandomIterator> firstRandom(RandomIterator(1, 21, std::random_device()()));
EvenIterator<RandomIterator> secondRandom(RandomIterator(20, 21));
printAll(firstRandom, secondRandom);
return 0;
}
While these toys might seem mundane, it's not difficult to imagine using iterators and iterator decorators to do powerful things with a simple interface - decorating a forward-only iterator of database results with an iterator which constructs a model object from a single result, for example. These patterns enable memory-efficient iteration of infinite sets and, with a filter like the one I wrote above, potentially lazy evaluation of results.
Part of the power of C++ templates is your iterator interface, when applied to the likes of fixed-length C arrays, decays to simple and efficient pointer arithmetic, making it a truly zero-cost abstraction.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "279"
}
|
Q: "Hidden Secrets" of the Visual Studio .NET debugger? As much as I generally don't like the discussion/subjective posts on SO, I have really come to appreciate the "Hidden Secrets" set of posts that people have put together. They provide a great overview of some commonly missed tools that you might now otherwise discover.
For this question I would like to explore the Visual Studio .NET debugger. What are some of the "hidden secrets" in the VS.NET debugger that you use often or recently discovered and wish you would have known long ago?
A: *
*The threads window, from Debug -> Windows -> Threads. You can Freeze and Thaw threads, and switch the active thread. This is awesome when debugging or replicating an issue with a multithreading application.
*You can drag & drop the yellow "Next Statement" arrow to another place. When the program resumes, it will resume execution at that statement. You can add it to the toolbar, a blue arrow called Set Next Statement, but it's not there by default.
*You can "undo" the navigation you did, like scrolling, going to another file, or jumping to a reference. The shortcut is ctrl-- (control minus.) That way you can jump into a function, examine the code there, and go back to where you were without looking.
A: Conditional breakpoints.
A: You can load windbg extensions into the Visual Studio debugger and use them from the immediate window.
A: As mentioned in another post, Sara Ford is doing a current series on the VS debugger.
Her blog is the best source of VS tips: http://blogs.msdn.com/saraford/archive/tags/Visual+Studio+2008+Tip+of+the+Day/default.aspx
A: This is kind of an old one. If you add a watch expression err,hr, then this will hold the result of GetLastError(), formatted as an HRESULT (VC++ debugger only).
A: You can drag the current-line cursor (yellow arrow) up and down your code when execution is paused.
Additionally, in order to enable this during pause on exception you have to click "enable editing" on exception details first.
You can also make VS break on handled exceptions by checking the ones of interest under:
Debug->Exceptions : Thrown column
A: Some useful shortcut keys.
*
*F11 to step into a method.
*Shift-F11 to step out of a method.
*F10 to step over a method.
A: Things I use often:
*
*Click the menu item "Debug | Exceptions" (or Ctrl-D, E for short) and you can enable breaking at the time that any exception is thrown, or choose to not break on certain exceptions.
*You can set up the debugger to download some of the framework source code and symbols from a MS server and step into the framework code. (Some libraries, like System.ServiceModel, are not yet available.) It's in the Options window under Debugging. See MSDN How-To.
*You can use the VS.NET debugger to debug Javascript running in IE. You just need to install the IE javascript debugger, and enable javascript debugging in IE's settings. Then on a JS error it will pop up a "do you want to debug" dialog box, and you can choose to debug in VS.NET.
A: One of my favorite features is the "When Hit..." option available on a breakpoint. You can print a message with the value of a variable along with lots of other information, such as:
*
*$ADDRESS - Current Instruction
*$CALLER - Previous Function Name
*$CALLSTACK - Call Stack
*$FUNCTION - Current Function Name
*$PID - Process ID
*$PNAME - Process Name
*$TID - Thread ID
*$TNAME - Thread Name
You can also have it run a macro, but I've never used that feature.
A: For .NET applications, System.Diagnostics has lots of useful debugging things. The Debugger class for example:
Debugger.Break(); // Programmatically set a break point
Debugger.Launch(); // Launch the debugger if not already attached
Debugger.IsAttached // Check if the debugger is attached
System.Diagnostics also has lots of good attributes. The two I've used are the debugger display attribute for changing the details put into the locals window and the step through attribute for skipping code you don't care about debugging:
// Displays the value of Property1 for any "MyClass" instance in the debugger
[DebuggerDisplay("{Property1}")]
public class MyClass {
public string Property1 { get; set; }
[DebuggerStepThrough]
public void DontStepInto() {
// An action we don't want to debug
}
}
A: You can right-click an object in the Watch window and click Make Object ID.
It will assign that instance an ID number, allowing you to see, in a complicated object graph, which objects refer to the same instance.
A: As a web developer who works with Web Services that are within the same solution as my front-end code most of the time, I found the ability to "attach" to a process to be a HUGE time saver.
Before I found this hidden gem, I would always have to set a breakpoint on some front-end code that called a web service method and step into it. Now that I know about this trick/feature I can easily set breakpoints on any part of my code that I want to which saves me loads of time and effort.
A: $exception in the watch window will show the exception that is currently being processed even if you don't have a catch that assigns the Exception instance to a named variable.
A: You can open and place a breakpoint in a source file even if the file belongs to another solution (an external file). The debugger can still hit the breakpoint, so there is no need to open another Visual Studio instance to debug the external file. This is helpful in debugging web services for which you have the source. It works as long as all the sources are current and compiled.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
}
|
Q: How can I measure the actual memory usage of an application or process? How do you measure the memory usage of an application or process in Linux?
From the blog article of Understanding memory usage on Linux, ps is not an accurate tool to use for this intent.
Why ps is "wrong"
Depending on how you look at it, ps is not reporting the real memory usage of processes. What it is really doing is showing how much real memory each process would take up if it were the only process running. Of course, a typical Linux machine has several dozen processes running at any given time, which means that the VSZ and RSS numbers reported by ps are almost definitely wrong.
(Note: This question is covered here in great detail.)
A: Use time.
Not the Bash builtin time, but the one you can find with which time, for example /usr/bin/time.
Here's what it covers, on a simple ls:
$ /usr/bin/time --verbose ls
(...)
Command being timed: "ls"
User time (seconds): 0.00
System time (seconds): 0.00
Percent of CPU this job got: 0%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 2372
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 1
Minor (reclaiming a frame) page faults: 121
Voluntary context switches: 2
Involuntary context switches: 9
Swaps: 0
File system inputs: 256
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
A: #!/bin/ksh
#
# Returns total memory used by process $1 in kb.
#
# See /proc/NNNN/smaps if you want to do something
# more interesting.
#
IFS=$'\n'
for line in $(</proc/$1/smaps)
do
[[ $line =~ ^Size:\s+(\S+) ]] && ((kb += ${.sh.match[1]}))
done
print $kb
A: I'm using htop; it's a very good console program similar to Windows Task Manager.
A: Get Valgrind. Give it your program to run, and it'll tell you plenty about its memory usage.
This would apply only for the case of a program that runs for some time and stops. I don't know if Valgrind can get its hands on an already-running process or shouldn't-stop processes such as daemons.
A: A good test of the more "real world" usage is to open the application, run vmstat -s, and check the "active memory" statistic. Close the application, wait a few seconds, and run vmstat -s again.
However much active memory was freed was evidently in use by the application.
A: The below command line will give you the total memory used by the various process running on the Linux machine in MB:
ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' | awk '{total=total + $1} END {print total}'
A: If the process is not using up too much memory (either because you expect this to be the case, or some other command has given this initial indication), and the process can withstand being stopped for a short period of time, you can try to use the gcore command.
gcore <pid>
Check the size of the generated core file to get a good idea how much memory a particular process is using.
This won't work too well if the process is using hundreds of megabytes, or gigabytes, as the core generation could take several seconds or minutes to complete depending on I/O performance. During the core creation the process is stopped (or "frozen") to prevent memory changes. So be careful.
Also make sure the mount point where the core is generated has plenty of disk space and that the system will not react negatively to the core file being created in that particular directory.
A: Note: this only works reliably when memory consumption increases
If you want to monitor memory usage by a given process (or a group of processes sharing a common name, e.g. google-chrome), you can use my bash script:
while true; do ps aux | awk '{print $5, $11}' | grep chrome | sort -n > /tmp/a.txt; sleep 1; diff /tmp/{b,a}.txt; mv /tmp/{a,b}.txt; done;
this will continuously look for changes and print them.
A: Besides the solutions listed in the answers, you can use the Linux command "top". It provides a dynamic real-time view of the running system, and it gives the CPU and memory usage for the whole system as well as for every program, in percentage:
top
to filter by a program PID:
top -p <PID>
To filter by a program name:
top | grep <PROCESS NAME>
"top" provides also some fields such as:
VIRT -- Virtual Image (kb): The total amount of virtual memory used by the task
RES -- Resident size (kb): The non-swapped physical memory a task has used ; RES = CODE + DATA.
DATA -- Data+Stack size (kb): The amount of physical memory devoted to other than executable code, also known as the 'data resident set' size or DRS.
SHR -- Shared Mem size (kb): The amount of shared memory used by a task. It simply reflects memory that could be potentially shared with other processes.
Reference here.
A: With ps or similar tools you will only get the amount of memory pages allocated by that process. This number is correct, but:
*
*does not reflect the actual amount of memory used by the application, only the amount of memory reserved for it
*can be misleading if pages are shared, for example by several threads or by using dynamically linked libraries
If you really want to know what amount of memory your application actually uses, you need to run it within a profiler. For example, Valgrind can give you insights about the amount of memory used, and, more importantly, about possible memory leaks in your program. The heap profiler tool of Valgrind is called 'massif':
Massif is a heap profiler. It performs detailed heap profiling by taking regular snapshots of a program's heap. It produces a graph showing heap usage over time, including information about which parts of the program are responsible for the most memory allocations. The graph is supplemented by a text or HTML file that includes more information for determining where the most memory is being allocated. Massif runs programs about 20x slower than normal.
As explained in the Valgrind documentation, you need to run the program through Valgrind:
valgrind --tool=massif <executable> <arguments>
Massif writes a dump of memory usage snapshots (e.g. massif.out.12345). These provide (1) a timeline of memory usage and (2), for each snapshot, a record of where in your program memory was allocated. A great graphical tool for analyzing these files is massif-visualizer. But I found ms_print, a simple text-based tool shipped with Valgrind, to be of great help already.
To find memory leaks, use the (default) memcheck tool of valgrind.
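For example (the snapshot number in the file name is whatever your run produced):

ms_print massif.out.12345
valgrind --leak-check=full ./yourprogram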
A: This is an excellent summary of the tools and problems: archive.org link
I'll quote it, so that more devs will actually read it.
If you want to analyse memory usage of the whole system or to thoroughly analyse memory usage of one application (not just its heap usage), use exmap. For whole system analysis, find processes with the highest effective usage, they take the most memory in practice, find processes with the highest writable usage, they create the most data (and therefore possibly leak or are very ineffective in their data usage). Select such application and analyse its mappings in the second listview. See exmap section for more details. Also use xrestop to check high usage of X resources, especially if the process of the X server takes a lot of memory. See xrestop section for details.
If you want to detect leaks, use valgrind or possibly kmtrace.
If you want to analyse heap (malloc etc.) usage of an application, either run it in memprof or with kmtrace, profile the application and search the function call tree for biggest allocations. See their sections for more details.
A: If you want something quicker than profiling with Valgrind, and your kernel is older and you can't use smaps, a ps with the options to show the resident set of the process (with ps -o rss,command) can give you a quick and reasonable approximation of the real amount of non-swapped memory being used.
A: I would suggest that you use atop. You can find everything about it on this page. It is capable of providing all the necessary KPIs for your processes, and it can also capture to a file.
A: Try the pmap command:
sudo pmap -x <process pid>
A: Another vote for Valgrind here, but I would like to add that you can use a tool like Alleyoop to help you interpret the results generated by Valgrind.
I use the two tools all the time and always have lean, non-leaky code to proudly show for it ;)
A: Check out this shell script to check memory usage by application in Linux.
It is also available on GitHub and in a version without paste and bc.
A: Given some of the answers (thanks thomasrutter), to get the actual swap and RAM for a single application, I came up with the following, say we want to know what 'firefox' is using
sudo smem | awk '/firefox/{swap += $5; pss += $7;} END {print "swap = "swap/1024" PSS = "pss/1024}'
Or for libvirt;
sudo smem | awk '/libvirt/{swap += $5; pss += $7;} END {print "swap = "swap/1024" PSS = "pss/1024}'
This will give you the total in MB like so;
swap = 0 PSS = 2096.92
swap = 224.75 PSS = 421.455
Tested on Ubuntu 16.04 through 20.04.
A: I am using Arch Linux and there's this wonderful package called ps_mem:
ps_mem -p <pid>
Example Output
$ ps_mem -S -p $(pgrep firefox)
Private + Shared = RAM used Swap used Program
355.0 MiB + 38.7 MiB = 393.7 MiB 35.9 MiB firefox
---------------------------------------------
393.7 MiB 35.9 MiB
=============================================
A: There isn't a single answer for this because you can't pinpoint precisely the amount of memory a process uses. Most processes under Linux use shared libraries.
For instance, let's say you want to calculate memory usage for the 'ls' process. Do you count only the memory used by the executable 'ls' (if you could isolate it)? How about libc? Or all these other libraries that are required to run 'ls'?
linux-gate.so.1 => (0x00ccb000)
librt.so.1 => /lib/librt.so.1 (0x06bc7000)
libacl.so.1 => /lib/libacl.so.1 (0x00230000)
libselinux.so.1 => /lib/libselinux.so.1 (0x00162000)
libc.so.6 => /lib/libc.so.6 (0x00b40000)
libpthread.so.0 => /lib/libpthread.so.0 (0x00cb4000)
/lib/ld-linux.so.2 (0x00b1d000)
libattr.so.1 => /lib/libattr.so.1 (0x00229000)
libdl.so.2 => /lib/libdl.so.2 (0x00cae000)
libsepol.so.1 => /lib/libsepol.so.1 (0x0011a000)
You could argue that they are shared by other processes, but 'ls' can't be run on the system without them being loaded.
Also, if you need to know how much memory a process needs in order to do capacity planning, you have to calculate how much each additional copy of the process uses. I think /proc/PID/status might give you enough information of the memory usage at a single time. On the other hand, Valgrind will give you a better profile of the memory usage throughout the lifetime of the program.
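For a quick look at that file, something like the following works (the PID 1234 is illustrative); VmRSS is the resident set and VmSize the virtual size:
grep -E '^Vm(Peak|Size|HWM|RSS)' /proc/1234/status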
A: It is hard to tell for sure, but here are two "close" things that can help.
$ ps aux
will give you Virtual Size (VSZ)
You can also get detailed statistics from the /proc file-system by going to /proc/$pid/status.
The most important is the VmSize, which should be close to what ps aux gives.
/proc/19420$ cat status
Name: firefox
State: S (sleeping)
Tgid: 19420
Pid: 19420
PPid: 1
TracerPid: 0
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000
FDSize: 256
Groups: 4 6 20 24 25 29 30 44 46 107 109 115 124 1000
VmPeak: 222956 kB
VmSize: 212520 kB
VmLck: 0 kB
VmHWM: 127912 kB
VmRSS: 118768 kB
VmData: 170180 kB
VmStk: 228 kB
VmExe: 28 kB
VmLib: 35424 kB
VmPTE: 184 kB
Threads: 8
SigQ: 0/16382
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000020001000
SigCgt: 000000018000442f
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
Cpus_allowed: 03
Mems_allowed: 1
voluntary_ctxt_switches: 63422
nonvoluntary_ctxt_switches: 7171
A: While this question seems to be about examining currently running processes, I wanted to see the peak memory used by an application from start to finish. Besides Valgrind, you can use tstime, which is much simpler. It measures the "highwater" memory usage (RSS and virtual). From this answer.
A: If your code is in C or C++ you might be able to use getrusage() which returns you various statistics about memory and time usage of your process.
Not all platforms support this, though, and those that don't will return 0 values for the memory-use options.
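On platforms where it is supported, a minimal C sketch might look like this (the field and its unit, kilobytes, are as documented for Linux):
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0)
        /* on Linux, ru_maxrss is the peak resident set size in kB */
        printf("peak RSS: %ld kB\n", ru.ru_maxrss);
    return 0;
}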
Instead you can look at the virtual file created in /proc/[pid]/statm (where [pid] is replaced by your process id. You can obtain this from getpid()).
This file will look like a text file with 7 integers. You are probably most interested in the first (all memory use) and sixth (data memory use) numbers in this file.
A: In recent versions of Linux, use the smaps subsystem. For example, for a process with a PID of 1234:
cat /proc/1234/smaps
It will tell you exactly how much memory it is using at that time. More importantly, it will divide the memory into private and shared, so you can tell how much memory your instance of the program is using, without including memory shared between multiple instances of the program.
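For example, on kernels that report Pss (proportional set size) in smaps, you can sum it up with a one-liner like this (PID 1234 is illustrative):
awk '/^Pss:/ { pss += $2 } END { print pss " kB" }' /proc/1234/smaps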
A: ps -eo size,pid,user,command --sort -size | \
awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' |\
cut -d "" -f2 | cut -d "-" -f1
Use this as root and you can get a clear output for memory usage by each process.
Output example:
0.00 Mb COMMAND
1288.57 Mb /usr/lib/firefox
821.68 Mb /usr/lib/chromium/chromium
762.82 Mb /usr/lib/chromium/chromium
588.36 Mb /usr/sbin/mysqld
547.55 Mb /usr/lib/chromium/chromium
523.92 Mb /usr/lib/tracker/tracker
476.59 Mb /usr/lib/chromium/chromium
446.41 Mb /usr/bin/gnome
421.62 Mb /usr/sbin/libvirtd
405.11 Mb /usr/lib/chromium/chromium
302.60 Mb /usr/lib/chromium/chromium
291.46 Mb /usr/lib/chromium/chromium
284.56 Mb /usr/lib/chromium/chromium
238.93 Mb /usr/lib/tracker/tracker
223.21 Mb /usr/lib/chromium/chromium
197.99 Mb /usr/lib/chromium/chromium
194.07 Mb conky
191.92 Mb /usr/lib/chromium/chromium
190.72 Mb /usr/bin/mongod
169.06 Mb /usr/lib/chromium/chromium
155.11 Mb /usr/bin/gnome
136.02 Mb /usr/lib/chromium/chromium
125.98 Mb /usr/lib/chromium/chromium
103.98 Mb /usr/lib/chromium/chromium
93.22 Mb /usr/lib/tracker/tracker
89.21 Mb /usr/lib/gnome
80.61 Mb /usr/bin/gnome
77.73 Mb /usr/lib/evolution/evolution
76.09 Mb /usr/lib/evolution/evolution
72.21 Mb /usr/lib/gnome
69.40 Mb /usr/lib/evolution/evolution
68.84 Mb nautilus
68.08 Mb zeitgeist
60.97 Mb /usr/lib/tracker/tracker
59.65 Mb /usr/lib/evolution/evolution
57.68 Mb apt
55.23 Mb /usr/lib/gnome
53.61 Mb /usr/lib/evolution/evolution
53.07 Mb /usr/lib/gnome
52.83 Mb /usr/lib/gnome
51.02 Mb /usr/lib/udisks2/udisksd
50.77 Mb /usr/lib/evolution/evolution
50.53 Mb /usr/lib/gnome
50.45 Mb /usr/lib/gvfs/gvfs
50.36 Mb /usr/lib/packagekit/packagekitd
50.14 Mb /usr/lib/gvfs/gvfs
48.95 Mb /usr/bin/Xwayland :1024
46.21 Mb /usr/bin/gnome
42.43 Mb /usr/bin/zeitgeist
42.29 Mb /usr/lib/gnome
41.97 Mb /usr/lib/gnome
41.64 Mb /usr/lib/gvfs/gvfsd
41.63 Mb /usr/lib/gvfs/gvfsd
41.55 Mb /usr/lib/gvfs/gvfsd
41.48 Mb /usr/lib/gvfs/gvfsd
39.87 Mb /usr/bin/python /usr/bin/chrome
37.45 Mb /usr/lib/xorg/Xorg vt2
36.62 Mb /usr/sbin/NetworkManager
35.63 Mb /usr/lib/caribou/caribou
34.79 Mb /usr/lib/tracker/tracker
33.88 Mb /usr/sbin/ModemManager
33.77 Mb /usr/lib/gnome
33.61 Mb /usr/lib/upower/upowerd
33.53 Mb /usr/sbin/gdm3
33.37 Mb /usr/lib/gvfs/gvfsd
33.36 Mb /usr/lib/gvfs/gvfs
33.23 Mb /usr/lib/gvfs/gvfs
33.15 Mb /usr/lib/at
33.15 Mb /usr/lib/at
30.03 Mb /usr/lib/colord/colord
29.62 Mb /usr/lib/apt/methods/https
28.06 Mb /usr/lib/zeitgeist/zeitgeist
27.29 Mb /usr/lib/policykit
25.55 Mb /usr/lib/gvfs/gvfs
25.55 Mb /usr/lib/gvfs/gvfs
25.23 Mb /usr/lib/accountsservice/accounts
25.18 Mb /usr/lib/gvfs/gvfsd
25.15 Mb /usr/lib/gvfs/gvfs
25.15 Mb /usr/lib/gvfs/gvfs
25.12 Mb /usr/lib/gvfs/gvfs
25.10 Mb /usr/lib/gnome
25.10 Mb /usr/lib/gnome
25.07 Mb /usr/lib/gvfs/gvfsd
24.99 Mb /usr/lib/gvfs/gvfs
23.26 Mb /usr/lib/chromium/chromium
22.09 Mb /usr/bin/pulseaudio
19.01 Mb /usr/bin/pulseaudio
18.62 Mb (sd
18.46 Mb (sd
18.30 Mb /sbin/init
18.17 Mb /usr/sbin/rsyslogd
17.50 Mb gdm
17.42 Mb gdm
17.09 Mb /usr/lib/dconf/dconf
17.09 Mb /usr/lib/at
17.06 Mb /usr/lib/gvfs/gvfsd
16.98 Mb /usr/lib/at
16.91 Mb /usr/lib/gdm3/gdm
16.86 Mb /usr/lib/gvfs/gvfsd
16.86 Mb /usr/lib/gdm3/gdm
16.85 Mb /usr/lib/dconf/dconf
16.85 Mb /usr/lib/dconf/dconf
16.73 Mb /usr/lib/rtkit/rtkit
16.69 Mb /lib/systemd/systemd
13.13 Mb /usr/lib/chromium/chromium
13.13 Mb /usr/lib/chromium/chromium
10.92 Mb anydesk
8.54 Mb /sbin/lvmetad
7.43 Mb /usr/sbin/apache2
6.82 Mb /usr/sbin/apache2
6.77 Mb /usr/sbin/apache2
6.73 Mb /usr/sbin/apache2
6.66 Mb /usr/sbin/apache2
6.64 Mb /usr/sbin/apache2
6.63 Mb /usr/sbin/apache2
6.62 Mb /usr/sbin/apache2
6.51 Mb /usr/sbin/apache2
6.25 Mb /usr/sbin/apache2
6.22 Mb /usr/sbin/apache2
3.92 Mb bash
3.14 Mb bash
2.97 Mb bash
2.95 Mb bash
2.93 Mb bash
2.91 Mb bash
2.86 Mb bash
2.86 Mb bash
2.86 Mb bash
2.84 Mb bash
2.84 Mb bash
2.45 Mb /lib/systemd/systemd
2.30 Mb (sd
2.28 Mb /usr/bin/dbus
1.84 Mb /usr/bin/dbus
1.46 Mb ps
1.21 Mb openvpn hackthebox.ovpn
1.16 Mb /sbin/dhclient
1.16 Mb /sbin/dhclient
1.09 Mb /lib/systemd/systemd
0.98 Mb /sbin/mount.ntfs /dev/sda3 /media/n0bit4/Data
0.97 Mb /lib/systemd/systemd
0.96 Mb /lib/systemd/systemd
0.89 Mb /usr/sbin/smartd
0.77 Mb /usr/bin/dbus
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.76 Mb su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.75 Mb sudo su
0.74 Mb /usr/bin/dbus
0.71 Mb /usr/lib/apt/methods/http
0.68 Mb /bin/bash /usr/bin/mysqld_safe
0.68 Mb /sbin/wpa_supplicant
0.66 Mb /usr/bin/dbus
0.61 Mb /lib/systemd/systemd
0.54 Mb /usr/bin/dbus
0.46 Mb /usr/sbin/cron
0.45 Mb /usr/sbin/irqbalance
0.43 Mb logger
0.41 Mb awk { hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }
0.40 Mb /usr/bin/ssh
0.34 Mb /usr/lib/chromium/chrome
0.32 Mb cut
0.32 Mb cut
0.00 Mb [kthreadd]
0.00 Mb [ksoftirqd/0]
0.00 Mb [kworker/0:0H]
0.00 Mb [rcu_sched]
0.00 Mb [rcu_bh]
0.00 Mb [migration/0]
0.00 Mb [lru
0.00 Mb [watchdog/0]
0.00 Mb [cpuhp/0]
0.00 Mb [cpuhp/1]
0.00 Mb [watchdog/1]
0.00 Mb [migration/1]
0.00 Mb [ksoftirqd/1]
0.00 Mb [kworker/1:0H]
0.00 Mb [cpuhp/2]
0.00 Mb [watchdog/2]
0.00 Mb [migration/2]
0.00 Mb [ksoftirqd/2]
0.00 Mb [kworker/2:0H]
0.00 Mb [cpuhp/3]
0.00 Mb [watchdog/3]
0.00 Mb [migration/3]
0.00 Mb [ksoftirqd/3]
0.00 Mb [kworker/3:0H]
0.00 Mb [kdevtmpfs]
0.00 Mb [netns]
0.00 Mb [khungtaskd]
0.00 Mb [oom_reaper]
0.00 Mb [writeback]
0.00 Mb [kcompactd0]
0.00 Mb [ksmd]
0.00 Mb [khugepaged]
0.00 Mb [crypto]
0.00 Mb [kintegrityd]
0.00 Mb [bioset]
0.00 Mb [kblockd]
0.00 Mb [devfreq_wq]
0.00 Mb [watchdogd]
0.00 Mb [kswapd0]
0.00 Mb [vmstat]
0.00 Mb [kthrotld]
0.00 Mb [ipv6_addrconf]
0.00 Mb [acpi_thermal_pm]
0.00 Mb [ata_sff]
0.00 Mb [scsi_eh_0]
0.00 Mb [scsi_tmf_0]
0.00 Mb [scsi_eh_1]
0.00 Mb [scsi_tmf_1]
0.00 Mb [scsi_eh_2]
0.00 Mb [scsi_tmf_2]
0.00 Mb [scsi_eh_3]
0.00 Mb [scsi_tmf_3]
0.00 Mb [scsi_eh_4]
0.00 Mb [scsi_tmf_4]
0.00 Mb [scsi_eh_5]
0.00 Mb [scsi_tmf_5]
0.00 Mb [bioset]
0.00 Mb [kworker/1:1H]
0.00 Mb [kworker/3:1H]
0.00 Mb [kworker/0:1H]
0.00 Mb [kdmflush]
0.00 Mb [bioset]
0.00 Mb [kdmflush]
0.00 Mb [bioset]
0.00 Mb [jbd2/sda5
0.00 Mb [ext4
0.00 Mb [kworker/2:1H]
0.00 Mb [kauditd]
0.00 Mb [bioset]
0.00 Mb [drbd
0.00 Mb [irq/27
0.00 Mb [i915/signal:0]
0.00 Mb [i915/signal:1]
0.00 Mb [i915/signal:2]
0.00 Mb [ttm_swap]
0.00 Mb [cfg80211]
0.00 Mb [kworker/u17:0]
0.00 Mb [hci0]
0.00 Mb [hci0]
0.00 Mb [kworker/u17:1]
0.00 Mb [iprt
0.00 Mb [iprt
0.00 Mb [kworker/1:0]
0.00 Mb [kworker/3:0]
0.00 Mb [kworker/0:0]
0.00 Mb [kworker/2:0]
0.00 Mb [kworker/u16:0]
0.00 Mb [kworker/u16:2]
0.00 Mb [kworker/3:2]
0.00 Mb [kworker/2:1]
0.00 Mb [kworker/1:2]
0.00 Mb [kworker/0:2]
0.00 Mb [kworker/2:2]
0.00 Mb [kworker/0:1]
0.00 Mb [scsi_eh_6]
0.00 Mb [scsi_tmf_6]
0.00 Mb [usb
0.00 Mb [bioset]
0.00 Mb [kworker/3:1]
0.00 Mb [kworker/u16:1]
A: Use smem, which is an alternative to ps which calculates the USS and PSS per process. You probably want the PSS.
*
*USS - Unique Set Size. This is the amount of unshared memory unique to that process (think of it as U for unique memory). It does not include shared memory. Thus this will under-report the amount of memory a process uses, but it is helpful when you want to ignore shared memory.
*PSS - Proportional Set Size. This is what you want. It adds together the unique memory (USS), along with a proportion of its shared memory divided by the number of processes sharing that memory. Thus it will give you an accurate representation of how much actual physical memory is being used per process - with shared memory truly represented as shared. Think of the P being for physical memory.
How this compares to RSS as reported by ps and other utilities:
*
*RSS - Resident Set Size. This is the amount of shared memory plus unshared memory used by each process. If any processes share memory, this will over-report the amount of memory actually used, because the same shared memory will be counted more than once - appearing again in each other process that shares the same memory. Thus it is fairly unreliable, especially when high-memory processes have a lot of forks - which is common in a server, with things like Apache or PHP (FastCGI/FPM) processes.
Notice: smem can also (optionally) output graphs such as pie charts and the like. IMO you don't need any of that. If you just want to use it from the command line like you might use ps -A v, then you don't need to install the Python and Matplotlib recommended dependency.
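For example, assuming a reasonably recent smem, this filters by process name (-P) and prints human-readable units (-k):
smem -k -P firefox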
A: There isn't any easy way to calculate this. But some people have tried to get some good answers:
*
*ps_mem.py
*ps_mem.py at GitHub
A: Valgrind can show detailed information, but it slows down the target application significantly, and most of the time it changes the behavior of the application.
Exmap was something I didn't know yet, but it seems that you need a kernel module to get the information, which can be an obstacle.
I assume what everyone wants to know with respect to "memory usage" is the following...
In Linux, the amount of physical memory a single process might use can be roughly divided into following categories.
*
*M.a anonymous mapped memory
*.p private
*
*.d dirty == malloc/mmapped heap and stack allocated and written memory
*.c clean == malloc/mmapped heap and stack memory once allocated, written, then freed, but not reclaimed yet
*.s shared
*
*.d dirty == malloc/mmaped heap could get copy-on-write and shared among processes (edited)
*.c clean == malloc/mmaped heap could get copy-on-write and shared among processes (edited)
*M.n named mapped memory
*.p private
*
*.d dirty == file mmapped written memory private
*.c clean == mapped program/library text private mapped
*.s shared
*
*.d dirty == file mmapped written memory shared
*.c clean == mapped library text shared mapped
The showmap utility included in Android is quite useful:
virtual shared shared private private
size RSS PSS clean dirty clean dirty object
-------- -------- -------- -------- -------- -------- -------- ------------------------------
4 0 0 0 0 0 0 0:00 0 [vsyscall]
4 4 0 4 0 0 0 [vdso]
88 28 28 0 0 4 24 [stack]
12 12 12 0 0 0 12 7909 /lib/ld-2.11.1.so
12 4 4 0 0 0 4 89529 /usr/lib/locale/en_US.utf8/LC_IDENTIFICATION
28 0 0 0 0 0 0 86661 /usr/lib/gconv/gconv-modules.cache
4 0 0 0 0 0 0 87660 /usr/lib/locale/en_US.utf8/LC_MEASUREMENT
4 0 0 0 0 0 0 89528 /usr/lib/locale/en_US.utf8/LC_TELEPHONE
4 0 0 0 0 0 0 89527 /usr/lib/locale/en_US.utf8/LC_ADDRESS
4 0 0 0 0 0 0 87717 /usr/lib/locale/en_US.utf8/LC_NAME
4 0 0 0 0 0 0 87873 /usr/lib/locale/en_US.utf8/LC_PAPER
4 0 0 0 0 0 0 13879 /usr/lib/locale/en_US.utf8/LC_MESSAGES/SYS_LC_MESSAGES
4 0 0 0 0 0 0 89526 /usr/lib/locale/en_US.utf8/LC_MONETARY
4 0 0 0 0 0 0 89525 /usr/lib/locale/en_US.utf8/LC_TIME
4 0 0 0 0 0 0 11378 /usr/lib/locale/en_US.utf8/LC_NUMERIC
1156 8 8 0 0 4 4 11372 /usr/lib/locale/en_US.utf8/LC_COLLATE
252 0 0 0 0 0 0 11321 /usr/lib/locale/en_US.utf8/LC_CTYPE
128 52 1 52 0 0 0 7909 /lib/ld-2.11.1.so
2316 32 11 24 0 0 8 7986 /lib/libncurses.so.5.7
2064 8 4 4 0 0 4 7947 /lib/libdl-2.11.1.so
3596 472 46 440 0 4 28 7933 /lib/libc-2.11.1.so
2084 4 0 4 0 0 0 7995 /lib/libnss_compat-2.11.1.so
2152 4 0 4 0 0 0 7993 /lib/libnsl-2.11.1.so
2092 0 0 0 0 0 0 8009 /lib/libnss_nis-2.11.1.so
2100 0 0 0 0 0 0 7999 /lib/libnss_files-2.11.1.so
3752 2736 2736 0 0 864 1872 [heap]
24 24 24 0 0 0 24 [anon]
916 616 131 584 0 0 32 /bin/bash
-------- -------- -------- -------- -------- -------- -------- ------------------------------
22816 4004 3005 1116 0 876 2012 TOTAL
A: Three more methods to try:
*
*ps aux --sort pmem
It sorts the output by %MEM.
*ps aux | awk '{print $2, $4, $11}' | sort -k2r | head -n 15
It sorts using pipes.
*top -a
It starts top sorting by %MEM
(Extracted from here)
A: Based on an answer to a related question.
You may use SNMP to get the memory and CPU usage of a process on a particular device on the network :)
Requirements:
*
*The device running the process should have snmp installed and running
*snmp should be configured to accept requests from where you will run the script below (it may be configured in file snmpd.conf)
*You should know the process ID (PID) of the process you want to monitor
Notes:
*
*HOST-RESOURCES-MIB::hrSWRunPerfCPU is the number of centi-seconds of the total system's CPU resources consumed by this process. Note that on a multi-processor system, this value may increment by more than one centi-second in one centi-second of real (wall clock) time.
*HOST-RESOURCES-MIB::hrSWRunPerfMem is the total amount of real system memory allocated to this process.
Process monitoring script
echo "IP address: "
read ip
echo "Specfiy PID: "
read pid
echo "Interval in seconds: "
read interval
while [ 1 ]
do
date
snmpget -v2c -c public $ip HOST-RESOURCES-MIB::hrSWRunPerfCPU.$pid
snmpget -v2c -c public $ip HOST-RESOURCES-MIB::hrSWRunPerfMem.$pid
sleep $interval;
done
A: /proc/xxx/numa_maps gives some info there: N0=??? N1=???. But this result might be lower than the actual result, as it only counts pages which have been touched.
A: Use the in-built System Monitor GUI tool available in Ubuntu.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "858"
}
|
Q: 2d Data Structure in C# I'm looking for resources that can help me determine which approach to use in creating a 2d data structure with C#.
A: Do you mean multidimensional array? It's simple:
<type>[,] <name> = new <type>[<first dimension>,<second dimension>];
Here is MSDN reference:
Multidimensional Arrays (C#)
A: @Traumapony-- I'd actually state that the real performance gain is made in one giant flat array, but that may just be my C++ image processing roots showing.
It depends on what you need the 2D structure to do. If it's storing something where each set of items in the second dimension is the same size, then you want to use something like a large 1D array, because the seek times are faster and the data management is easier. Like:
for (y = 0; y < ysize; y++){
for (x = 0; x < xsize; x++){
theArray[y*xsize + x] = //some stuff!
}
}
And then you can do operations which ignore neighboring pixels with a single passthrough:
totalsize = xsize*ysize;
for (x = 0; x < totalsize; x++){
theArray[x] = //some stuff!
}
Except that in C# you probably want to actually call a C++ library to do this kind of processing; C++ tends to be faster for this, especially if you use the Intel compiler.
If you have the second dimension having multiple different sizes, then nothing I said applies, and you should look at some of the other solutions. You really need to know what your functional requirements are in order to be able to answer the question.
A: Depending on the type of the data, you could look at using a straight two-dimensional (jagged) array:
int[][] intGrid;
If you need to get tricky, you could always go the generics approach:
Dictionary<KeyValuePair<int,int>,string>;
That allows you to put complex types in the value part of the dictionary, although makes indexing into the elements more difficult.
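For illustration, getting values in and out of such a dictionary (a hypothetical sparse grid of strings, assuming System.Collections.Generic is imported) looks like this:
Dictionary<KeyValuePair<int, int>, string> grid =
    new Dictionary<KeyValuePair<int, int>, string>();
grid[new KeyValuePair<int, int>(3, 4)] = "cell (3,4)";
string value = grid[new KeyValuePair<int, int>(3, 4)];
// value == "cell (3,4)"; note the verbose key construction mentioned above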
If you're looking to store spatial 2d point data, System.Drawing has a lot of support for points in 2d space.
A: For performance, it's best not to use multi-dimensional arrays ([,]); instead, use jagged arrays. e.g.:
<type>[][] <name> = new <type>[<first dimension>];
for (int i = 0; i < <first dimension>; i++)
{
<name>[i] = new <type>[<second dimension>];
}
To access:
<type> item = <name>[<first index>][<second index>];
A: Data Structures in C#
Seriously, I'm not trying to be critical of the question, but I got tons of useful results right at the top of my search when I Googled for:
data structures c#
If you have specific questions about specific data structures, we might have more specific answers...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Style - When to serialize a Django model Instance: signals vs model's save method I plan to serialize a Django model to XML when it's saved or updated. (The XML is going to be imported into a flash movie.) Is it better to listen for a post_save() or pre_save() signal and then perform the serialization, or to just handle it in the model's save() method?
A: If it's core functionality for saving the model you'll want it as part of the save method. However, if you already have a functioning model and you want to extend it for other purposes then signals are your best bet since they allow for properly decoupled modules.
A good example might be that you want to add event logging to your site, so you simply listen for the signals that signify an event rather than modifying the original site code.
post_save() is usually best because it means the model has been successfully saved; pre_save() doesn't guarantee that the save will be successful, so it shouldn't be used for anything that depends on the save having completed.
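A minimal sketch of the decoupled signal approach (the model name and output path are illustrative; assumes the Django 1.0 signals API):
from django.core import serializers
from django.db.models.signals import post_save

from myapp.models import MyModel  # illustrative

def export_to_xml(sender, instance, **kwargs):
    # serialize just the saved instance to XML for the flash movie
    xml = serializers.serialize('xml', [instance])
    f = open('/tmp/%s-%s.xml' % (sender.__name__, instance.pk), 'w')
    f.write(xml)
    f.close()

post_save.connect(export_to_xml, sender=MyModel)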
A: Post save. That way the new data (the reason for performing the serialization) is already in the database. It'll make for a much cleaner bit of code that simply takes from the database and doesn't have to worry about adding an extra value.
The other way that comes to mind is to maintain the XML file in parallel to the database. That is to say, in your save() add the data to the database and to the XML file. This would have much less overhead if you're dealing with huge tables.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What is the best way to measure "spare" CPU time on a Linux based system For some of the customers that we develop software for, we are required to "guarantee" a certain amount of spare resources (memory, disk space, CPU). Memory and disk space are simple, but CPU is a bit more difficult.
One technique that we have used is to create a process that consumes a guaranteed amount of CPU time (say 2.5 seconds every 5 seconds). We run this process at highest priority in order to guarantee that it runs and consumes all of its required CPU cycles.
If our normal applications are able to run at an acceptable level of performance and can pass all of their functionality tests while the spare time process is running as well, then we "assume" that we have met our commitment for spare CPU time.
I'm sure that there are other techniques for doing the same thing, and would like to learn about them.
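For concreteness, a minimal sketch of the duty-cycle process described above (C, assuming POSIX; the 2.5 s burn / 2.5 s idle values are the ones from the question):
#include <time.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        clock_t start = clock();
        /* busy-spin until roughly 2.5 seconds of CPU time are consumed */
        while (clock() - start < (clock_t)(2.5 * CLOCKS_PER_SEC))
            ;
        usleep(2500000); /* idle for the remainder of the 5-second window */
    }
    return 0;
}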
A: So this may not be exactly the answer you're looking for, but if all you want to do is make sure your application doesn't exceed certain limits on resource consumption and you're running on Linux, you can customize /etc/security/limits.conf (may be a different file on your distro of choice) to force the limits on a particular user and only run the process under that user. This is of course assuming that you have that level of control on your client's production environment.
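For example (the user name and values are illustrative; each line follows the format <domain> <type> <item> <value>):
# /etc/security/limits.conf
appuser  hard  cpu  10       # max CPU time, in minutes
appuser  hard  as   524288   # max address space, in kB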
A: If I understand correctly, your concern is whether the application also runs while a given percentage of the processing power is not available.
The most incontrovertible approach is to use underpowered hardware for your testing. If the processor in your setup allows you to, you can downclock it online. The Linux kernel gives you an easy interface for doing this; see /sys/devices/system/cpu/cpu0/cpufreq/. There are also a number of GUI applications available for this.
If your processor isn't capable of changing clock speed online, you can do it the hard way and select a smaller multiplier in your BIOS.
I think you get the idea. If it runs at 1600 MHz instead of 2400 MHz, you can guarantee 33% of spare CPU time.
A: SAR is a standard *nix process that collects information about the operational use of system resources. It also has a command line tool that allows you to create various reports, and it's common for the data to be persisted in a database.
A: With a multi-core/processor system you could use Affinity to your advantage.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131335",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How do you organize and keep track of multiple (many) projects As a contractor, out-sourcer and shareware author,I have about 5-10 projects going on at any one time. Each project has a todo list, requirements need to be communicated to other outsources and employees, status needs to be given to clients, and developer's questions need to be answered.
Sometimes it is too much... but then I realize that I'm not very organized and there has to be a better way.
What is your better way?
How do you keep track of requirements for multiple projects, assign work to multiple developers, obtain and give status for multiple projects to multiple clients?
What tools do you use? What processes?
A: This may sound really old-tech, but a different set of notepads for each project. Now, hear me out.
I know that notepads aren't searchable, and they aren't indexed, etc. But they will have meeting dates and times (if you've been taking notes during meetings, even on the phone), they have the ability of never crashing, and they're future proof in the event of wondering what you did a few years back but can't remember if the old project files made it to your new hard drive.
But the biggest reason is CYA-- logbooks and notepads can be used in the event of someone suing you as legal documents, especially if you've been diligent about dates. It might also work during patent discussions as well, showing a clear date and time of ideas being made. During another life, I worked in biology labs, and electronic record keeping, because it's so fickle, wasn't allowed for the legal reasons of being able to show that the work you did was your own. That attitude has permeated my own project notetaking, and helping to keep track of everything I need to get done.
A: Tools are not the answer unless you already have the knowledge, organization, and self-discipline to use them well. I highly recommend Getting Things Done.
A: You should have a look at No Kahuna. Easy to use; free and pay versions; active, responsive development team.
A: I'm a big fan of trac (http://trac.edgewall.org/) for managing software projects. It provides task and bug management with integrated wiki and source control.
A: We have been using FogBugz for managing several projects (10+) and clients (20+) for more than 4 years.
We have a project for each product and another project for each client. In this way I can control the requirements for each product and the pending activities related to each client.
A: Try Omniplan if you're on a Mac. I find it just makes sense. I also find I don't end up fighting the interface and instead concentrate on using it to help me plan better.
Edit: It goes well with OmniFocus and no, I don't work for the Omni Group :)
A: If you are into Agile methods (or even if not) you could try some of the Agile tools out there. Look in http://www.agile-tools.net/ for some comparisons. I use xplanner at work, where we coordinate requirements and work over iterations among several teams. It has its quirks but it generally gets the work done and allows for some useful agile structure. I am sure some others will have preferences for more mature tools.
Trac (as Mark Roddy mentioned) is also nice, because it integrates a wiki, task and defect management, so it can be an interesting tool if you have none of those already in place.
A: I should say that we use Mantis now, but I wish it was better. I wish I could use it for customer-facing queries, and I wish I could open and assign issues by email.
ScrumWorks Pro looks promising, but amazingly expensive for me, with 15 developers.
AccuNote may be an option, but it is new to me
A: I'm using the customer support, project planning and issue management portions of OpenERP. Having your issues and feature requests, along with the tasks required to get them done on the same CRM that allows you to manage your customers is a big benefit.
A: I have used SourceGear Vault to manage all our software projects. Our business nature is very much driven by project basis - typically I have 5 active projects running at one period of time.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
}
|
Q: Strange VB6 build problems (related to nlog) This I think is related to my use of the nlog C++ API (and my question on the nlog forum is here); the purpose of my asking this question here is to get a wider audience to my problem and perhaps to also get some more general ideas behind the VB6 IDE's failure to build in my particular scenario.
Briefly, I am having trouble building VB6 components which reference unmanaged C++ components that make calls to nlog's C\C++ API (which is defined in NLogC.DLL). The build problems are not occurring at compile time; they occur when the binary is being built, which suggests to me that it's some kind of linker problem? Don't know enough about how VB6 binaries are produced to tell. The VB6 binary is produced, but it is corrupted and crashes shortly after it is invoked.
Has anyone had any similar experiences with VB6 (doesn't have to be related to nlog or C++)?
edit: Thanks for all the responses to this rather obscure problem. Still no headway unfortunately; my findings since I posted this:
*
*'Tweaking' the compile options doesn't appear to help in this problem.
*Adding a reference to the nlog-enabled C++ component from a 'blank' VB6 project doesn't crash it or cause weird build problems. So it isn't a 'native' VB6 issue, possibly an issue with the interaction between nlog and the various components and 3rd party libraries used by other referenced components?
*As for C++ calling conventions: the nlog-enabled C++ component is - as far as I can see - compliant to these conventions and indeed works fine when referenced by VB6 as long as it is not making any nlog API calls. Not sure if the nlogc.DLL itself is VB6 compliant but I would have thought that that is immaterial since the API calls are being made from the C++ component; VB6 shouldn't know or care about what the C++ component is referencing (that's as far as my understanding on this goes...)
edit2: I should also note that the error message obtained during build is: "Errors during load. Please refer to "xxx" for details". When I bring up the log file, all that there is in there is: "Cannot load control xxx". Interestingly, all references to that particular control disappears from that particular project resulting in compile errors if I were to try to build again.
A: Got around the problem by using NLog's COM interface (NLog.ComInterop.DLL) from my unmanaged C++ code. Not as easy to do as the C\C++ API but at least it doesn't crash my VB6 components.
A: I would try tweaking some of the Compile options found in the Project, Properties menu, Compile panel to see if they yield any additional hints as to what is going wrong.
For example if you compile the executable to p-code rather than native code does it still crash on startup.
A: What error message do you get when you run your compiled binary?
I doubt the compiler/linker is the problem: project references in a VB6 project are not linked into the final executable. A project reference in VB6 is actually a reference to a COM type library (which may or may not be embedded in a .dll or other binary file type). Project references primarily serve two purposes:
*
*The IDE extracts type information from the referenced type libraries which it then displays in the Object Browser (and in the Intellisense drop-down)
*At compile-time, the compiler extracts the type information stored in the referenced libraries, including the CLSID of each class that you instantiate, and embeds this data into the executable. This allows your executable to create instances of classes contained in the libraries that you referenced.
Note that the compiled binary doesn't link to any code in the referenced libraries, and it doesn't even contain the filenames of the referenced libraries. The final executable only contains the CLSID's and other type information that it needs to instantiate COM objects at run-time.
It is much more likely that the issue is with NLog, or with how you are calling it from your code, rather than something gone awry in the VB6 compile process.
A: If you think it might be a linker problem, this should crash it the same way:
*
*create a new standard project (of any kind)
*add a new module and copy the "declare"-statements into it
*compile
If it doesn't crash it is something else.
A: An exact description of the error, or a screenshot of what is going on, would help.
One thing to check is whether NLogC.DLL or the C++ DLL you built have the correct calling convention defined. Basically you can't have the DLL function names mangled or use anything but the STDCALL calling convention. If the C++ DLL has not been created with those two things in mind then it will fail to work with VB6.
MSDN Article on Calling convention.
A: "Cannot load control xxx" errors can be caused by .oca files which were created from a different version of an .ocx than currently used. If that is the case, deleting the .oca files helps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Deploying Perl to a share nothing cluster Anyone have suggestions for deployment methods for Perl modules to a share nothing cluster?
Our current method is very manual.
*
*Take down half the cluster
*Copy Perl modules (CPAN-style modules) to the downed cluster members
*ssh to each member and run perl Makefile.PL; make; make install on each module to be installed
*Confirm deployment
*In service the newly deployed cluster members, out of service the old cluster members and repeat steps 2 -> 4
This is obviously far from optimal, anyone have or know of good tool chains for deploying Perl modules to a shared nothing cluster?
A: Take one node offline, install Perl, and then use it to reimage the other nodes.
At least, that's how I imagine you'd want to install software in a shared-nothing cluster. Perl is just the application you happen to be installing.
A: Assuming all the machines are identical, you should be able to keep one canonical installation, and use rsync or something to keep the others updated.
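A sketch of what that could look like (the hostname and path are illustrative; assumes identical architectures and Perl versions):
rsync -az --delete /usr/local/lib/perl5/ node02:/usr/local/lib/perl5/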
A: I have, in the past, developed a Perl program which used the Expect module (from CPAN) to automate basically the process you described, automatically sshing to each host, copying any necessary files, and performing the installations. Unfortunately, this was developed on-site for a client, so I do not have access to the code to share. If you're familiar with Expect, it shouldn't be too difficult to set up, though.
A: We currently have a clustered Perl application that does data processing. We also have numerous CPAN modules and modules that we've developed that the software depends on. When you say 'shared nothing', I'm assuming you're referring to things like NFS mounts.
If the machines have identical configurations, then you may be able to build your entire application into a single directory structure (eg: /opt/my-app), tar it up, and that could become the only thing you need to push to the boxes.
As far as deploying it to the boxes, you might be able to use Capistrano. We developed a couple of our own cluster utilities that piggybacked off of ssh - I've released one form of that utility: parallel-jobs. Its README shows an example of executing multiple parallel ssh commands. It's a small step to extend that program to be able to know about your cluster and then be able to execute the same command across the cluster (as opposed to a series of different commands).
A: If you are using a Debian or Ubuntu OS you could package your Perl modules - I have open sourced some code to help with this: Perl module builder. It's still very rough but does work and can be made to work on your own code as well as CPAN modules; this then makes deployment much easier.
There is also a project to get RedHat RPMs for all of CPAN; Dave Cross gave a talk, Perl in RPM-Land, which may be of use.
If you are on some other system which doesn't have packaging then the rsync option (install on one machine and then rsync to the others) should work as well; note you can mount a Windows share and rsync to it across Unix if needed.
Using a central manager like Puppet makes creating and maintaining machines in a cluster a lot easier to manage, from installing code to managing users and email configuration. There is also a Perl project in the pipeline to do something similar but this has not been made public yet.
A: Capistrano is a tool that allows you to run commands on a group of servers; it is perfectly suited to making your task considerably easier.
Further down the line of automation, but also complexity, is Puppet, which allows you to define a group of servers, give them roles and then push out sets of code to every machine subscribing to a certain role.
A: I am not sure exactly what a share-nothing cluster is, but if it uses a base *nix system like Fedora, Mandriva, or Ubuntu, many of the Perl modules are precompiled for specific architectures, and you can easily run these.
If these systems are of the same arch you can do as someone else said and just copy the compiled modules from system to system; just make sure you have all of the dependencies on the recipient system as well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: SVN installation I am going to install SVN for my personal projects. Is it better to install it on a spare machine (Win XP), or will I save myself grief if I install it on the machine I use for development (Vista)?
A: Installing your repository on a separate machine is probably a better idea, since at a minimum, it will allow your source code to survive a hard drive crash on your development machine.
If you're new to SVN, you can't beat the free e-book from Red-Bean and O'Reilly ... Check out "Version Control with Subversion" here: http://svnbook.red-bean.com/.
A: I recommend VisualSVN Server once you're ready to install...
A: There's really little point in installing on a "spare" machine. It doesn't consume any significant CPU or memory.
Other good reasons to install it on your main system:
*
*Faster repository access; not as big an issue with SVN as CVS, but checkins, checkouts, etc will be significantly faster with a local repo than one over the network.
*More likely to be backed up. You are backing up your dev box, right? Right? If not, there's a really good reason to. And usually boxes that you work on regularly are more likely to get backed up than ones sitting off in a corner somewhere.
*Less power consumption, presuming the "spare" box is otherwise off.
*As a really minor point, you won't have to muck around with network-based access, but this really isn't difficult in the first place.
The only good reason I can think to have it on a separate box is a single point of failure. If your Vista box kicks the bit bucket, then you're dead in the water. But hey, you were backing it up. Right? RIGHT?
A: In my opinion you HAVE to install it on another machine, and preferably one offsite and available over the internet. Doing it on another machine provides several advantages:
*
*You can do whatever you like to your dev machine config-wise and not worry about hosing your svn installation
*The repo acts as a backup of your code, so if you have some sort of disaster you can get your code back
*If the machine is available over the internet, you can work on your code anywhere on any machine
*You can easily ask people to look at your code by checking it out from the SVN. They may even contribute some code back!
*For me at least, there's some sort of significance to checking in the code. I think if the repo was on another machine, you would make sure your code was worth committing first.
Perhaps look at one of the free hosted services, like assembla.com. Have fun!
A: I currently use a hosted SVN server; this frees me from all the installation issues. I also have the benefit of an off-site backup, so if my office catches fire my source code will be safe.
Dreamhost hosts SVN even in the cheapest plan and you can install it with a single click; no SVN configuration knowledge is required.
A: Consider grabbing the Buildix application server from Thoughtworks & run it in a VM. You'll get a SVN server as well as a bunch of other goodies and, if you're ready to commit to it, you can consider installing it on a second box.
A: I'd prefer it on a different machine for flexibility (you could use a different system or get a new machine without impacting the repository) and for safety/security. By having it on a different machine, you eliminated the chance of losing everything if your machine dies.
A: If it's just for your personal projects and you always use the same machine anyway, just install it on the same machine. That is simpler.
A: Alternatively, you can install the repository on your development machine and use any file-replication utility to replicate it on a backup machine. I personally use Foldershare (http://foldershare.com) to replicate my repo.
A: If you install it on a remote machine you will also need to install a server service to get and send files between the development environment and your repository. This can be done with the svnserve daemon or by configuring an HTTP server like Apache.
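For example, to serve repositories under an illustrative root directory with the daemon:
svnserve -d -r /var/svn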
If you set it up on the local machine you don't need to know or do any of the above, just run the install and then use your favourite client to interact with it.
A: If you have it on the same machine it is trivially easy to get up and running. You don't have to worry about how you're going to connect to the repository, whether you're going to secure the connection, etc. By trivially easy, I mean, svnadmin create c:\repo.
If you set it up on a separate machine and work out the connectivity issues ahead of time, you will save yourself some time down the road if you get a new development machine or if someone else starts working with you. As for backups, it's true that having the repo on a separate machine means that you have some of the code in two places. However, you still need backups for your repository. Otherwise, when your repository goes down, you'll only have the versions of the files that you have checked out. History (and isn't that the point of version control) will be lost forever.
A: If you are installing it yourself on Windows, may I recommend SlikSVN http://www.sliksvn.com/en/download/ - it's a lot easier than configuring Apache.
They also do hosted SVN but I don't have experience with them.
A: An off-site hosted SVN is probably the easiest way of getting it set up. CVSDude actually offers one 2 MB repository for free. ProjectLocker, CVSDude, and DreamHost all offer paid plans which cost a nominal fee ($5-$20) per month, and provide you with the ability to open the SVN repository to other users, and provide trac, and a few other services.
If you want to set it up at home, and you have a spare computer that you feel comfortable leaving running all the time, then certainly that's a better choice, giving you an automatic backup, as well as having rollback capabilities. 100MB ethernet is plenty fast, and even Wireless G shouldn't have any speed problems with SVN.
Local SVN doesn't give you much except for rollback, which is better than nothing, but still not perfect.
A: IMHO, you can install it on your machine to avoid maintenance of a second machine.
You should also back up your repository on a network drive or on external media to avoid data loss.
Once you have your data in a safe place, you can very easily reinstall subversion on any available machine with any OS on it.
My company's Subversion is installed on Windows too, and I'm using hot-backup.py, which runs once an hour from the scheduler.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Is there a Dependency Injection framework for PHP4? I'm stuck on a PHP 4 server, and I would like to start moving an old legacy project to modern Design Patterns, including Dependency Injection. Are there any dependency injection frameworks that will work with PHP 4?
A: Most dependency injection frameworks use reflection to determine dependencies. Since PHP4 doesn't have typehints, you can't really do this. Experiments have been made with using config files - sometimes embedded in comments in the code (often called annotations). While this works, I find it a bit clunky. In my opinion, you're better off using PHP's dynamic nature to your advantage than trying to apply statically typed solutions to it. You can get a long way with hand-crafted factories. See for example this post on how.
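A sketch of such a hand-crafted factory in PHP 4 syntax (the class and its collaborator are made-up examples):
class MessageMailer {
    var $transport;
    function MessageMailer(&$transport) { // PHP 4-style constructor
        $this->transport =& $transport;
    }
}

function &create_mailer() {
    $transport = new SmtpTransport('localhost'); // hypothetical collaborator
    $mailer = new MessageMailer($transport);
    return $mailer;
}

$mailer =& create_mailer(); // dependencies wired in one place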
A: i found this (drip), but it looks like it hasn't been updated in a few years.
A: I don't think a dependency injection framework will really work on PHP because of the way object-oriented programs are structured in it. First of all, it's not like C# or Java wherein the binaries are already there and you just have to find a way to instantiate this object and inject it into another. PHP has to load the class files and interpret them before it can use them. So if you have deep inheritance hierarchies with PHP I don't think that's a good idea.
Given that PHP is a scripting language, it's best to leverage it as that -- a scripting language. Which means, I would just make use of simple factory or builder methods to do something similar to dependency injection. I wouldn't burden it with a DI framework that will only add to the stuff the PHP runtime has to process for every web request (unless you do opcode caching, but there will still be overhead that's not incurred by web platforms for Java and .NET). If I have to change the objects that will be injected into objects or how they are created, it would be a simple task to just edit the script that contains the factory/builder methods. No need to recompile there anyway. So I have flexibility and I have a lightweight architecture that's suitable for the PHP way of doing things.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What is the best method to convert floating point to an integer in JavaScript? There are several different methods for converting floating point numbers to Integers in JavaScript. My question is what method gives the best performance, is most compatible, or is considered the best practice?
Here are a few methods that I know of:
var a = 2.5;
window.parseInt(a); // 2
Math.floor(a); // 2
a | 0; // 2
I'm sure there are others out there. Suggestions?
A: From "Javascript: The Good Parts" from Douglas Crockford:
Number.prototype.integer = function () {
return Math[this < 0 ? 'ceil' : 'floor'](this);
}
Doing that, you are adding a method to every Number object.
Then you can use it like that:
var x = 1.2, y = -1.2;
x.integer(); // 1
y.integer(); // -1
(-10 / 3).integer(); // -3
A: The 'best' way depends on:
*
*rounding mode: what type of rounding (of the float to integer) you expect/require
for positive and/or negative numbers that have a fractional part.
Common examples:
float | trunc | floor | ceil | near (half up)
------+-------+-------+-------+---------------
+∞ | +∞ | +∞ | +∞ | +∞
+2.75 | +2 | +2 | +3 | +3
+2.5 | +2 | +2 | +3 | +3
+2.25 | +2 | +2 | +3 | +2
+0 | +0 | +0 | +0 | +0
NaN | NaN | NaN | NaN | NaN
-0 | -0 | -0 | -0 | -0
-2.25 | -2 | -3 | -2 | -2
-2.5 | -2 | -3 | -2 | -2
-2.75 | -2 | -3 | -2 | -3
-∞ | -∞ | -∞ | -∞ | -∞
For float to integer conversions we commonly expect "truncation"
(aka "round towards zero" aka "round away from infinity").
Effectively this just 'chops off' the fractional part of a floating point number.
Most techniques and (internally) built-in methods behave this way.
*input: how your (floating point) number is represented:
*
*String
Commonly radix/base: 10 (decimal)
*floating point ('internal') Number
*output: what you want to do with the resulting value:
*
*(intermediate) output String (default radix 10) (on screen)
*perform further calculations on resulting value
*range:
in what numerical range do you expect input/calculation-results
and for which range do you expect corresponding 'correct' output.
Only after these considerations are answered we can think about appropriate method(s) and speed!
Per ECMAScript 262 spec: all numbers (type Number) in javascript are represented/stored in:
"IEEE 754 Double Precision Floating Point (binary64)" format.
So integers are also represented in the same floating point format (as numbers without a fraction).
Note: most implementations do use more efficient (for speed and memory-size) integer-types internally when possible!
As this format stores 1 sign bit, 11 exponent bits and the first 53 significant bits ("mantissa"), we can say that: only Number-values between -2^52 and +2^52 can have a fraction.
In other words: all representable positive and negative Number-values between 2^52 and (almost) 2^(2^11/2=1024) (at which point the format calls it a day: Infinity) are already integers (internally rounded, as there are no bits left to represent the remaining fractional and/or least significant integer digits).
And there is the first 'gotcha':
You can not control the internal rounding-mode of Number-results for the built-in Literal/String to float conversions (rounding-mode: IEEE 754-2008 "round to nearest, ties to even") and built-in arithmetic operations (rounding-mode: IEEE 754-2008 "round-to-nearest").
For example:
2^52+0.25 = 4503599627370496.25 is rounded and stored as: 4503599627370496
2^52+0.50 = 4503599627370496.50 is rounded and stored as: 4503599627370496
2^52+0.75 = 4503599627370496.75 is rounded and stored as: 4503599627370497
2^52+1.25 = 4503599627370497.25 is rounded and stored as: 4503599627370497
2^52+1.50 = 4503599627370497.50 is rounded and stored as: 4503599627370498
2^52+1.75 = 4503599627370497.75 is rounded and stored as: 4503599627370498
2^52+2.50 = 4503599627370498.50 is rounded and stored as: 4503599627370498
2^52+3.50 = 4503599627370499.50 is rounded and stored as: 4503599627370500
To control rounding your Number needs a fractional part (and at least one bit to represent that), otherwise ceil/floor/trunc/near returns the integer you fed into it.
To correctly ceil/floor/trunc a Number up to x significant fractional decimal digit(s), we only care if the corresponding lowest and highest decimal fractional value will still give us a binary fractional value after rounding (so not being ceiled or floored to the next integer).
So, for example, if you expect 'correct' rounding (for ceil/floor/trunc) up to 1 significant fractional decimal digit (x.1 to x.9), we need at least 3 bits (not 4) to give us a binary fractional value:
0.1 is closer to 1/(2^3=8)=0.125 than it is to 0, and 0.9 is closer to 1-1/(2^3=8)=0.875 than it is to 1.
Only up to ±2^(53-3=50) will all representable values have a non-zero binary fraction for no more than the first significant decimal fractional digit (values x.1 to x.9).
For 2 decimals ±2^(53-6=47), for 3 decimals ±2^(53-9=44), for 4 decimals ±2^(53-13=40), for 5 decimals ±2^(53-16=37), for 6 decimals ±2^(53-19=34), for 7 decimals ±2^(53-23=30), for 8 decimals ±2^(53-26=27), for 9 decimals ±2^(53-29=24), for 10 decimals ±2^(53-33=20), for 11 decimals ±2^(53-36=17), etc..
A "Safe Integer" in javascript is an integer:
*
*that can be exactly represented as an IEEE-754 double precision number, and
*whose IEEE-754 representation cannot be the result of rounding any other integer to fit the IEEE-754 representation
(even though ±2^53 (as an exact power of 2) can exactly be represented, it is not a safe integer because it could also have been ±(2^53+1) before it was rounded to fit into the maximum of 53 most significant bits).
This effectively defines a subset range of (safely representable) integers between -2^53 and +2^53:
*
*from: -(2^53 - 1) = -9007199254740991 (inclusive)
(a constant provided as static property Number.MIN_SAFE_INTEGER since ES6)
*to: +(2^53 - 1) = +9007199254740991 (inclusive)
(a constant provided as static property Number.MAX_SAFE_INTEGER since ES6)
Trivial polyfill for these 2 new ES6 constants:
Number.MIN_SAFE_INTEGER || (Number.MIN_SAFE_INTEGER=
-(Number.MAX_SAFE_INTEGER=9007199254740991) //Math.pow(2,53)-1
);
Since ES6 there is also a complimentary static method Number.isSafeInteger() which tests if the passed value is of type Number and is an integer within the safe integer range (returning a boolean true or false).
Note: will also return false for: NaN, Infinity and obviously String (even if it represents a number).
Polyfill example:
Number.isSafeInteger || (Number.isSafeInteger = function(value){
return typeof value === 'number' &&
value === Math.floor(value) &&
value < 9007199254740992 &&
value > -9007199254740992;
});
ECMAScript 2015 / ES6 provides a new static method Math.trunc()
to truncate a float to an integer:
Returns the integral part of the number x, removing any fractional digits. If x is already an integer, the result is x.
Or put simpler (MDN):
Unlike the other three Math methods: Math.floor(), Math.ceil() and Math.round(), the way Math.trunc() works is very simple and straightforward:
just truncate the dot and the digits behind it, no matter whether the argument is a positive number or a negative number.
We can further explain (and polyfill) Math.trunc() as such:
Math.trunc || (Math.trunc = function(n){
return n < 0 ? Math.ceil(n) : Math.floor(n);
});
Note, the above polyfill's payload can potentially be better pre-optimized by the engine compared to:
Math[n < 0 ? 'ceil' : 'floor'](n);
Usage: Math.trunc(/* Number or String */)
Input: (Integer or Floating Point) Number (but will happily try to convert a String to a Number)
Output: (Integer) Number (but will happily try to convert Number to String in a string-context)
Range: -2^52 to +2^52 (beyond this we should expect 'rounding-errors' (and at some point scientific/exponential notation) simply because our Number input in IEEE 754 already lost fractional precision: since Numbers between ±2^52 to ±2^53 are already internally rounded integers (for example 4503599627370509.5 is internally already represented as 4503599627370510) and beyond ±2^53 the integers also lose precision (powers of 2)).
Float to integer conversion by subtracting the Remainder (%) of a division by 1:
Example: result = n-n%1 (or n-=n%1)
This should also truncate floats. Since the Remainder operator has a higher precedence than Subtraction we effectively get: (n)-(n%1).
For positive Numbers it's easy to see that this floors the value: (2.5) - (0.5) = 2,
for negative Numbers this ceils the value: (-2.5) - (-0.5) = -2 (because --=+ so (-2.5) + (0.5) = -2).
Since the input and output are Number we should get the same useful range and output compared to ES6 Math.trunc() (or it's polyfill).
Note: though I fear (not sure) there might be differences: because we are doing arithmetic (which internally uses rounding mode "nearTiesEven" (aka Banker's Rounding)) on the original Number (the float) and a second derived Number (the fraction), this seems to invite compounding representation and arithmetic rounding errors, thus potentially returning a float after all..
Float to integer conversion by (ab-)using bitwise operations:
This works by internally forcing a (floating point) Number conversion (truncation and overflow) to a signed 32-bit integer value (two's complement) by using a bitwise operation on a Number (and the result is converted back to a (floating point) Number which holds just the integer value).
Again, input and output is Number (and again silent conversion from String-input to Number and Number-output to String).
More importantly though (and usually forgotten and not explained):
depending on the bitwise operation and the number's sign, the useful range will be limited between:
-2^31 to +2^31 (like ~~num or num|0 or num>>0) OR 0 to +2^32 (num>>>0).
This should be further clarified by the following lookup-table (containing all 'critical' examples):
n | n>>0 OR n<<0 OR | n>>>0 | n < 0 ? -(-n>>>0) : n>>>0
| n|0 OR n^0 OR ~~n | |
| OR n&0xffffffff | |
----------------------------+-------------------+-------------+---------------------------
+4294967298.5 = (+2^32)+2.5 | +2 | +2 | +2
+4294967297.5 = (+2^32)+1.5 | +1 | +1 | +1
+4294967296.5 = (+2^32)+0.5 | 0 | 0 | 0
+4294967296 = (+2^32) | 0 | 0 | 0
+4294967295.5 = (+2^32)-0.5 | -1 | +4294967295 | +4294967295
+4294967294.5 = (+2^32)-1.5 | -2 | +4294967294 | +4294967294
etc... | etc... | etc... | etc...
+2147483649.5 = (+2^31)+1.5 | -2147483647 | +2147483649 | +2147483649
+2147483648.5 = (+2^31)+0.5 | -2147483648 | +2147483648 | +2147483648
+2147483648 = (+2^31) | -2147483648 | +2147483648 | +2147483648
+2147483647.5 = (+2^31)-0.5 | +2147483647 | +2147483647 | +2147483647
+2147483646.5 = (+2^31)-1.5 | +2147483646 | +2147483646 | +2147483646
etc... | etc... | etc... | etc...
+1.5 | +1 | +1 | +1
+0.5 | 0 | 0 | 0
0 | 0 | 0 | 0
-0.5 | 0 | 0 | 0
-1.5 | -1 | +4294967295 | -1
etc... | etc... | etc... | etc...
-2147483646.5 = (-2^31)+1.5 | -2147483646 | +2147483650 | -2147483646
-2147483647.5 = (-2^31)+0.5 | -2147483647 | +2147483649 | -2147483647
-2147483648 = (-2^31) | -2147483648 | +2147483648 | -2147483648
-2147483648.5 = (-2^31)-0.5 | -2147483648 | +2147483648 | -2147483648
-2147483649.5 = (-2^31)-1.5 | +2147483647 | +2147483647 | -2147483649
-2147483650.5 = (-2^31)-2.5 | +2147483646 | +2147483646 | -2147483650
etc... | etc... | etc... | etc...
-4294967294.5 = (-2^32)+1.5 | +2 | +2 | -4294967294
-4294967295.5 = (-2^32)+0.5 | +1 | +1 | -4294967295
-4294967296 = (-2^32) | 0 | 0 | 0
-4294967296.5 = (-2^32)-0.5 | 0 | 0 | 0
-4294967297.5 = (-2^32)-1.5 | -1 | +4294967295 | -1
-4294967298.5 = (-2^32)-2.5 | -2 | +4294967294 | -2
Note 1: the last column extends the negative range down to -4294967295 using (n < 0 ? -(-n>>>0) : n>>>0).
Note 2: bitwise introduces its own conversion-overhead(s) (severity vs Math depends on actual implementation, so bitwise could be faster (often on older historic browsers)).
Obviously, if your 'floating point' number was a String to begin with,
parseInt(/*String*/, /*Radix*/) would be an appropriate choice to parse it into an integer Number.
parseInt() will truncate as well (for positive and negative numbers).
The range is again limited to IEEE 754 double precision floating point as explained above for the Math method(s).
Finally, if you have a String and expect a String as output you could also chop off the radix point and fraction (which also gives you a larger accurate truncation range compared to IEEE 754 double precision floating point (±2^52))!
EXTRA:
From the info above you should now have all you need to know.
If for example you'd want to round away from zero (aka round towards infinity) you could modify the Math.trunc() polyfill, for example:
Math.intToInf || (Math.intToInf = function(n){
return n < 0 ? Math.floor(n) : Math.ceil(n);
});
A: The answer has already been given but just to be clear.
Use the Math library for this. round, ceil or floor functions.
parseInt is for converting a string to an int which is not what is needed here
toFixed is for converting a float to a string also not what is needed here
Since the Math functions will not be doing any conversions to or from a string it will be faster than any of the other choices which are wrong anyway.
A: According to this website:
parseInt is occasionally used as a means of turning a floating point number into an integer. It is very ill suited to that task because if its argument is of numeric type it will first be converted into a string and then parsed as a number...
For rounding numbers to integers one of Math.round, Math.ceil and Math.floor are preferable...
A: You can use Number(a).toFixed(0);
Or even just a.toFixed(0);
Edit:
That's rounding to 0 places, slightly different than truncating, and as someone else suggested, toFixed returns a string, not a raw integer. Useful for display purposes.
var num = 2.7; // typeof num is "Number"
num.toFixed(0) == "3"
A: var i = parseInt(n, 10);
If you don't specify a radix, values like '010' will be treated as octal (and so the result will be 8, not 10).
A: Using bitwise operators. It may not be the clearest way of converting to an integer, but it works on any kind of datatype.
Suppose your function takes an argument value, and the function works in such a way that value must always be an integer (and 0 is accepted). Then any of the following will assign value as an integer:
value = ~~(value)
value = value | 0;
value = value & 0xFF; // one byte; use this if you want to limit the integer to
// a predefined number of bits/bytes
The best part is that this works with strings (what you might get from a text input, etc) that are numbers: ~~("123.45") === 123. Any non-numeric values result in 0, i.e.,
~~(undefined) === 0
~~(NaN) === 0
~~("ABC") === 0
It does work with hexadecimal numbers as strings (with a 0x prefix)
~~("0xAF") === 175
There is some type coercion involved, I suppose. I'll do some performance tests to compare these to parseInt() and Math.floor(), but I like having the extra convenience of no Errors being thrown and getting a 0 for non-numbers
A: So I made a benchmark, on Chrome when the input is already a number, the fastest would be ~~num and num|0, half speed: Math.floor, and the slowest would be parseInt see here
EDIT: it seems there are already another person who made rounding benchmark (more result) and additional ways: num>>0 (as fast as |0) and num - num%1 (sometimes fast)
A: Apparently double bitwise-not is the fastest way to floor a number:
var x = 2.5;
console.log(~~x); // 2
Used to be an article here, getting a 404 now though: http://james.padolsey.com/javascript/double-bitwise-not/
Google has it cached: http://74.125.155.132/search?q=cache:wpZnhsbJGt0J:james.padolsey.com/javascript/double-bitwise-not/+double+bitwise+not&cd=1&hl=en&ct=clnk&gl=us
But the Wayback Machine saves the day! http://web.archive.org/web/20100422040551/http://james.padolsey.com/javascript/double-bitwise-not/
A: The question appears to be asking specifically about converting from a float to an int. My understanding is that the way to do this is to use toFixed. So...
var myFloat = 2.5;
var myInt = myFloat.toFixed(0);
Does anyone know if Math.floor() is more or less performant than Number.toFixed()?
A: parseInt() is probably the best one. a | 0 doesn't do what you really want (it just assigns 0 if a is an undefined or null value, which means an empty object or array passes the test), and Math.floor works by some type trickery (it basically calls parseInt() in the background).
A: you could also do it this way:
var string = '1';
var integer = string * 1;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
}
|
Q: Sources for learning about Scheme Macros: define-syntax and syntax-rules I've read JRM's Syntax-rules Primer for the Merely Eccentric and it has helped me understand syntax-rules and how it's different from common-lisp's define-macro. syntax-rules is only one way of implementing a syntax transformer within define-syntax.
I'm looking for two things, the first is more examples and explanations of syntax-rules and the second is good sources for learning the other ways of using define-syntax. What resources do you recommend?
A: To answer your second question: syntax-case is the other form that goes inside define-syntax. Kent Dybvig is the primary proponent of syntax-case, and he has a tutorial on using it [PDF].
I also read the PLT Scheme documentation on syntax-case for a few more examples, and to learn about the variation in implementation.
A: The JRM Syntax-rules primer is quite good, but Chapter 36 of Programming Languages: Application and Interpretation, by Shriram Krishnamurthi (http://www.cs.brown.edu/~sk/Publications/Books/ProgLangs/) also has good coverage of writing Scheme macros. That material has been used and improved over several short articles, tech reports, etc, over the past 10 years, so it's not a 'this was true about the X implementation of Scheme in 1983 that is no longer accessible' paper.
A: The list of resources at The Scheme Cookbook is a great place to start.
If you prefer papers, then don't hesitate to visit readscheme.org.
A: Fear of Macros is a practical guide for using macros in Racket. It shows many ways of using define-syntax. The material may not be completely applicable to Scheme, but you might find some of the examples useful for knowing what can be done.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Using AppleScript to hide Keynote Text Fields in A Slide I am no AppleScript Jedi, I've only done a few simple things, but I haven't been able to figure this one out and could use some help:
My wife uses slides for her Art History courses and would like to use the same slides for exams (sans identifying names). Rather than create a new presentation, I'd like a tool that iterates through the slides and hides the text fields.
Looking through the Keynote dictionary didn't give me any clues as to how to approach this, any ideas?
A: AFAIK, with Applescript you can only access the title and the body boxes of the slides. If the text you wish to remove is consistently in either of these boxes the simplest solution would be to loop through the slides replacing that text and then saving a copy of the document.
tell application "Keynote"
open "/Path/To/Document"
repeat with currentSlide in slides of first slideshow
set title of currentSlide to " "
set body of currentSlide to " "
end repeat
save first slideshow in "/Path/To/Document without answers"
end tell
If the text is in a container created with the textbox tool, I don't think you can solve it with Applescript, but Keynote uses an XML based file format, so you could try doing it by editing the XML with your scripting language of choice. The XML schema is documented in the iWork Programming Guide.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How can a C program produce a core dump of itself without terminating? I want a C program to produce a core dump under certain circumstances. This is a program that runs in a production environment and isn't easily stopped and restarted to adjust other kinds of debugging code. Also, since it's in a production environment, I don't want to call abort(). The issues under investigation aren't easily replicated in a non-production environment. What I'd like is for the program, when it detects certain issues, to produce a core dump on its own, preferably with enough information to rename the file, and then continue.
A: Another way might be to use the Google Coredumper library. This creates a similar result to the fork+abort technique but plays nicer with multithreaded apps (suspends all threads for a little while before forking so that they don't make a mess in the child).
Example:
#include <google/coredumper.h>
...
WriteCoreDump("core.myprogram");
/* Keep going, we generated a core file,
* but we didn't crash.
*/
A: void create_dump(void)
{
if(!fork()) {
// Crash the app in your favorite way here
*(int *)0 = 42; /* dereference NULL to trigger SIGSEGV */
}
}
Fork the process then crash the child - it'll give you a snapshot whenever you want
A: Sun describes how to get a core file on Solaris, HP-UX, Redhat, and Windows here.
Solaris has the gcore program. HP-UX may have it.
Otherwise use gdb and its gcore command.
Windows has win-dbg-root\tlist.exe and win-dbg-root\adplus.vbs
A: Do you really want a core, or just a stacktrace ?
If all you want is a stacktrace you could take a look at the opensource here and try and integrate the code from there, or maybe just calling it from the command line is enough.
I believe some code in the gdb project might also be useful.
Another think you might want to do is to use gdb to attach to a running process.
$ gdb /path/to/exec 1234 # 1234 is the pid of the running process
A: The source code to produce a core dump is in 'gcore', which is part of the gdb package.
Also, the Sun has gcore.
Also, you have to have a separate process running the core dump, as the current process must be suspended. You'll find the details in the gcore source, or you can just run your platform's gcore with your process as the target.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
}
|
Q: C++ tr1 on GCC 3.4.4 (for the Nokia N810 tablet computer) What does it take to get C++ tr1 members (shared_ptr especially, but we'd like function and bind and ALL the others) working with GCC 3.4.4 (for the Nokia N810 tablet computer).
Has anyone done this? Attempted this?
It may not be feasible for us to upgrade to GCC 4.x to cross-compile for this device (but if you've done that, we'd love to know).
There may be many approaches, and I'd like to avoid dead ends others have hit.
We're trying to avoid bringing in boost, since it can be pretty interdependent (you bring in one boost header and you end up with 20 more), and keeping code size down is important to us.
Thank you!
A: For shared_ptr from boost, at least, the number of dependencies is pretty small. You can use bcp to extract subsets from boost as well, so you don't need the whole of boost to compile your application.
A: Are you able to use Boost (Smart Pointers, Bind, Function) directly? From the GCC changelogs, the actual std::tr1 stuff is only available from 4.0 onwards. :-(
I know there are differences between the Boost and std::tr1, but hopefully these can be worked around satisfactorily. Let me know how it works for you. :-)
A: On a different topic, I'm installing the g++-3.4 package on my Ubuntu laptop now, and I'm going to see if I can compile libstdc++ from the Subversion trunk on it. If so, I'll let you know! (If not, I'll let you know too. :-P)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Forcing a ListView to draw in the background I have a ListView which sometimes I need to put around 10000 items in. ListViews don't really handle this well, and they lock up for a couple of seconds while they sort the items and draw them. If you add the items in individually, it's even worse, locking up for nearly a minute.
To get around this, I thought I'd try populating the ListView before I need to display it, but unfortunately it has other ideas. It only starts drawing when I turn the panel that contains the ListView visible, making the program hang for a couple of seconds.
Any ideas for how I can eliminate this delay? Or is there another component that's relatively easy to use that is better at showing large quantities of data?
A: You need to use the VirtualMode.
A: Well. If you just want to load the content in the background you could try a thread to populate the ListView, which will let the form load.
I don't think you will get the pause if you put an Application.DoEvents(); when you are loading the items (which allows the form to redraw and receive events).
for (int ix=0; ix < 10000; ix ++)
{
listView1.Items.Add(ix.ToString());
Application.DoEvents();
}
I guess my suggestions are good if you aren't aware of VirtualMode
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Do I have to cause an ValueError in Python I have this code:
chars = #some list
try:
indx = chars.index(chars_to_find)
except ValueError:
#doSomething
else:
#doSomethingElse
I want to be able to do this because I don't like knowingly causing Exceptions:
chars = #some list
indx = chars.index(chars_to_find)
if indx == -1:
#doSomething
else:
#doSomethingElse
Is there a way I can do this?
A: Note that the latter approach is going against the generally accepted "pythonic" philosophy of EAFP, or "It is Easier to Ask for Forgiveness than Permission.", while the former follows it.
A: if element in mylist:
index = mylist.index(element)
# ... do something
else:
# ... do something else
A: For the specific case where your list is a sequence of single-character strings you can get what you want by changing the list to be searched to a string in advance (eg. ''.join(chars)).
You can then use the .find() method, which does work as you want. However, there's no corresponding method for lists or tuples.
Another possible option is to use a dictionary instead. eg.
d = dict((x, loc) for (loc,x) in enumerate(chars))
...
index = d.get(chars_to_find, -1) # Second argument is default if not found.
This may also perform better if you're doing a lot of searches on the list. If it's just a single search on a throwaway list though, its not worth doing.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How do you apply a .net attribute to a return type How do I apply the MarshalAsAttribute to the return type of the code below?
public ISomething Foo()
{
return new MyFoo();
}
A: According to http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.marshalasattribute.aspx:
[return: MarshalAs(<your marshal type>)]
public ISomething Foo()
{
return new MyFoo();
}
A: [return:MarshalAs]
public ISomething Foo()
{
return new MyFoo();
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: EWOULDBLOCK equivalent errno under Windows Perl G'day Stackoverflowers,
I'm the author of Perl's autodie pragma, which changes Perl's built-ins to throw exceptions on failure. It's similar to Fatal, but with lexical scope, an extensible exception model, more intelligent return checking, and much, much nicer error messages. It will be replacing the Fatal module in future releases of Perl (provisionally 5.10.1+), but can currently be downloaded from the CPAN for Perl 5.8.0 and above.
The next release of autodie will add special handling for calls to flock with the LOCK_NB (non-blocking) option. While a failed flock call would normally result in an exception under autodie, a failed call to flock using LOCK_NB will merely return false if the returned errno ($!) is EWOULDBLOCK.
The reason for this is so people can continue to write code like:
use Fcntl qw(:flock);
use autodie; # All perl built-ins now succeed or die.
open(my $fh, '<', 'some_file.txt');
my $lock = flock($fh, LOCK_EX | LOCK_NB); # Lock the file if we can.
if ($lock) {
# Opportunistically do something with the locked file.
}
In the above code, a lock that fails because someone else has the file locked already (EWOULDBLOCK) is not considered to be a hard error, so autodying flock merely returns a false value. In the situation that we're working with a filesystem that doesn't support file-locks, or a network filesystem and the network just died, then autodying flock generates an appropriate exception when it sees that our errno is not EWOULDBLOCK.
This works just fine in my dev version on Unix-flavoured systems, but it fails horribly under Windows. It appears that while Perl under Windows supports the LOCK_NB option, it doesn't define EWOULDBLOCK. Instead, the errno returned is 33 ("Domain error") when blocking would occur.
Obviously I can hard-code this as a constant into autodie, but that's not what I want to do here, because it means that I'm screwed if the errno ever changes (or has changed). I would love to compare it to the Windows equivalent of POSIX::EWOULDBLOCK, but I can't for the life of me find where such a thing would be defined. If you can help, let me know.
Answers I specifically don't want:
*
*Suggestions to hard-code it as a constant (or worse still, leave a magic number floating about).
*Not supporting LOCK_NB functionality at all under Windows.
*Assuming that any failure from a LOCK_NB call to flock should return merely false.
*Suggestions that I ask on p5p or perlmonks. I already know about them.
*An explanation of how flock, or exceptions, or Fatal work. I already know. Intimately.
A: For the Windows-specific error code, you want to use $^E. In this case, it's 33: "The process cannot access the file because another process has locked a portion of the file" (ERROR_LOCK_VIOLATION in winerror.h).
Unfortunately, I don't think Win32::WinError is in core. On the other hand, if Microsoft ever renumbered the Windows error codes, pretty much every Windows program ever written would stop working, so I don't think there'll be a problem with hardcoding it.
A: Under Win32 "native" Perl, note that $^E is more descriptive at 33, "The process cannot access the file because another process locked a portion of the file" which is ERROR_LOCK_VIOLATION (available from Win32::WinError).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: WSE 2.0 Web Services Client using .NET 2.0 What's the best way to access WSE 2.0 web services from .NET 2.0?
Using VS2005's web references is not working, because generated classes are using System.Web.Services as their base (instead of Microsoft.Web.Services2).
A: We use VS2003 to generate and update the web references with a batch file to copy them to the vs2005 and now 2008 project.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Preventing Mongrel/Mysql Errno::EPIPE exceptions I have a rails app that I have serving up XML on an infrequent basis.
This is being run with mongrel and mysql.
I've found that if I don't exercise the app for longer than a few hours it goes dead and starts throwing Errno::EPIPE errors. It seems that the mysql connection gets timed out for inactivity or something like that.
It can be restarted with 'mongrel_rails restart -P /path/to/the/mongrel.pid' ... but that's not really a solution.
My collaborator expects the app to be there when he is working on his part (and I am most likely not around).
My question is:
*
*What can I do to prevent this problem from occurring in the 1st place? (e.g. don't time me out!!).
*Failing that, is there some code I can insert somewhere to automatically remake the Db connection?
A: Here's a solution:
https://boxpanel.blueboxgrp.com/public/the_vault/index.php/Mongrel_/_MySQL_Timeout
The timeouts on the above solution seem a little high to me. You don't want your DB timeouts to be too low, because of the amount of memory a connection can use. If a connection is orphaned, you want it to time out reasonably (like not in one week.)
A: In other places, I also got the following suggestions:
*
*Try setting config.active_record.verification_timeout to something lower than whatever your mysql connection timeout setting is (a sketch follows below).
*There's a gem to work around this problem: mysql_retry_lost_connection
http://rubyforge.org/projects/zventstools/
"Reconnect to the MySQL server when you hit a lost connection error".
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: max time a report should take to generate report I am just curious to know how long, in minutes, the reporting service takes to generate a report when it returns 1MB of data. Assume views are used and the tables are properly indexed. SSRS reporting and server side generation.
A: Report generation time has two components:
*
*Data Acquisition time
*Render Time
So for 1 Mb of data, how many records (rows) are we talking? How many pages will the report have? How many controls per page? Does the report use charting? These are the factors that will determine generation time.
For most reports, data acquisition time is the most significant factor. Your report is never going to run faster than the raw data acquisition. So if you are using SQL, the report can't generate faster than the time required to run the query. I have seen queries that return far more than 1Mb of data very quickly. I have also seen queries that return very little data, that run for a long time.
On the render side, there are a couple of things that that can cause a report to run be slow. The first is in report aggregation. If a report needs to receive all of the records prior to starting rendering, then its performance will suffer. In particular, depending on the reporting tool. With large data sets (more than 10,000 records), you can have significant improvements in rendering by doing aggregation at the source (DB). The other is charting, which typically involves heavy rendering overhead and aggregation.
Most reporting systems allow you to build in timers or logging that will help you to performance tune the report. It is best to build timers into the report that will tell you what percentage of time the report is spending getting the data, and what percentage is spent rendering. When you have this information, you will know where to focus your energies.
If you are really trying to evaluate the performance of the reporting tool, the best way is to build a report that either reads a flat file or generates the data through code. In other words, eliminate the impact of the database and see how fast your reporting tool can generate pages.
Hope this helps.
A: How long is acceptable? Depends on what it's doing, how much it's run, things like that. Anything below 30 seconds would be fine if it's run once every day or two. If it's run once a week or once a month that number could be a lot higher.
A: The report itself is generally very fast, if you're seeing a hangup you may want to check the execution time of the query which generates the data. A complex query can take a long time, even if it only returns a little data...
A: I've found, when using BIRT and other reporting systems that the best improvements tend to come by offloading most of the work to the database at the back end.
In other words, don't send lots of data across the wire and sort or group it locally. The database is almost certainly going to outperform you with its SQL orderby and groupby clauses and optimizing indexes (among other things).
That way, you get faster extraction of the data you want AND less network traffic.
A: As several said already, a general question like this really can't be answered. However, I wrote up Turbo-charge Your Report Speed – General Rules & Guidelines (disclaimer - I'm the CTO at Windward Reports, a competitor of SSRS). I think that will help you look for what you can do to speed up the process.
And with all the caveats about the specifics matter a lot, on a 3GHz workstation we generally see 7-30 pages/second. Keep in mind this is numbers for Windward, not SSRS.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What am I missing by not moving my ASP.NET 2.0 site to ASP.NET 3.5? I have a web application using ASP.NET 2.0 and I want to know if I should be moving it to ASP.NET 3.5, particularly... what am I missing by not moving to ASP.NET 3.5?
I understand the disadvantages, but I don't understand the advantages.
What are the biggest benefits of migrating/rewriting?
Will I get a speed improvement?
Is MVC that much easier than my old-fashioned WebForm application?
Will it look cooler?
A: You will only miss access to the newer .NET 3.5 libraries, and cool syntax such as LINQ and lambda expressions. Performance wise they will run the same.
By the way, ASP.NET MVC is NOT included with .NET 3.5...yet.
A: New C# 3.0 compiler features.
A: Yes, MVC is that much easier than your old-fashioned WebForm application.
So is LINQ to SQL.
A: I'd say the biggest thing is Linq. At least it is for us, as we're completely replacing the old data layer with it! (Slowly, but surely.)
A: There are also other MVC framework that works with .net2 (monorail, promesh,...), so mvc is not related to framework version, it is just a pattern.
But, new framework features that I use and find useful:
*
*LINQ, LINQ2SQL
*Extension methods
*WCF services
*WF
A: LINQ, dude. LINQ. Don't knock it 'till you've tried it. ORM is fun again!
A: LINQ, but not LINQ to SQL (which I don't really like). LINQ to XML and LINQ to Objects are fantastic.
A: Lambda expressions FTW! Linq's extension methods for collections combined with lambda expressions are awesome.
A: Nobody has mentioned Extension methods yet?!? See http://weblogs.asp.net/scottgu/archive/2007/03/13/new-orcas-language-feature-extension-methods.aspx
And the items above (especially LINQ, Lambda expression, object, collection, and property initializers, etc.).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Which browsers and operating systems do you target on new websites? When you are working on a new website, what combinations of browsers and operating systems do you target, and at what priorities? Do you find targeting a few specific combinations (and ignoring the rest) better than trying to strive to make them all work as intended?
Common browsers:
*
*Firefox (1.5, 2, 3)
*Internet Explorer (6, 7, 8-beta)
*Opera
*Chrome
Common operating systems:
*
*Windows (XP, Vista)
*Mac OSX
*Linux
*Unix
A: Mainly I just target browsers as the sites I've built don't really depend on anything OS specific. As mentioned above, YAHOO's graded browser support guide is a good starting point on determining which browsers yous should/could support. And Yahoo's User Interface library (CSS+JavaScript) helps massively in achieving this.
But when developing sites I primarily do it on Firefox2 as it has the best web developing tools (firebug + web developer toolkit). Then I also test my sites with Opera 9.5 as it's my browser of choice for browsing. I've previously lost all hope on supporting IE6 at any reasonable level so these days I just inform my users to upgrade to IE7 which is almost capable of displaying sites similarly to FF2/3+Chrome+Opera.
FF3 and Chrome are so new at the moment that I tend to ignore them, but I must say: They're friggin fast! My javascript/css heavy sites are noticeably faster with them.
A: I'm doing:
*
*Firefox 2 and up
*IE 7 and up
*Konquorer or Safari (or maybe now Chrome)
A: Yahoo's graded browser support is a good guide:
A: It depends on your audience. If you are heavy on tech users, you may have 50% of your users on Firefox. If you have lots of moms and dads, you will probably have 75-80% of your users on IE 6 or 7. You probably need to get an alpha/beta out with Google analytics so you can get a measure of your audience.
A: Where I work, we target
*
*Firefox 2 and 3 on Windows
*Firefox 2 and 3 on Mac
*Safari on Windows and Mac
*IE 6 and 7
We are not specifically targeting any Linux browsers, but if they work in the list above, there's a good chance they work everywhere. We are also testing against Google's Chrome browser on Windows now.
A: I just figured out this week that if you bend a little and figure out how to validate your HTML you're much more likely not to have to care about cross browser stuff.
Oh yeah, except Javascript.
I get it working in Firefox first, that's what the boss uses. Opera last, that's what Bob uses. Har Har, just kidding Bob.
But even so, you can never be safe, because of the minutiae of browser incompatibility and the fact that 90% of the people you ask can't really tell you which browser they're using.
Can you click help and about? (Pause) No? Oh, that's right, you're using IE7
And even that old standby doesn't work anymore.
My advice is to lock down IE, like it's a terminal server, and try navigating your website. If you can click on everything and read everything then you're in the clear.
If you use sIFR and someone calls you telling you your logo is upside down, it's time to prioritize and worry about compatibility again; otherwise IE and FF and you're good to go.
A: Target none. Test against many.
A: Where I work, we test the following (in this order of priority, based on data from google analytics), all on Windows:
*
*IE 7
*IE 6
*Firefox 3
*Firefox 2
*Safari 3
We don't bother with Opera or older versions of browsers since the percentage of users is very small, however we do our best to code everything to standards, so there shouldn't be any big issues.
Of course, like Milhous said, it depends on your particular audience. YMMV.
A: The standard suite I'm used to is:
*
*IE6 (win)
*IE7 (win)
*Firefox 1.5+ (win/mac)
*Safari 2+ (win/mac)
*Opera 9+ (win/mac)
*Chrome (so far, if it clears Safari 3.0 on win, it seems to clear Chrome, too)
You could also generically claim support for IE6/7, Gecko, and WebKit... and it covers everything listed here but Opera, plus a few not listed. It's just a lot harder to test just the rendering engine and not the specific differences in browser versions and feel comfortable with the results.
A: I agree you should try and make it work in all, but if it is a new site I would seriously consider dropping support for IE6. From a development perspective it will save you hours of hair pulling if you don't need to support it.
You'll have to weigh this against your intended audience and whether you are willing to lose some customers that won't be willing (or able) to upgrade their browser.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Delphi 7 make complains about files not found I've got a BPG file that I've modified to use as a make file for our company's automated build server. In order to get it to work I had to change
uses
  unit1 in 'unit1.pas',
  unit2 in 'unit2.pas',
  ...

to

uses
  unit1,
  unit2,
  ...
in the DPR file to get it to work without the compiler giving me some guff about unit1.pas not found.
This is annoying because I want to use a BPG file to actually see the stuff in my project and every time I add a new unit, it auto-jacks that in 'unitx.pas' into my DPR file.
I'm running make -f [then some options], the DPR's that I'm compiling are not in the same directory as the make file, but I'm not certain that this matters. Everything compiles fine as long as the in 'unit1.pas' part is removed.
A: It could come from the fact that the search path in the IDE and the search path of the command line compiler are not the same. If you change the search path of the command line compiler you might be able to use exactly the same source code as within the IDE.
One possibility to configure the search path for the command-line compiler is to do it in a file called dcc32.cfg. Take a look at the help, there is a short description of dcc32.cfg in the IDE-help.
A: Well this work-around worked for me.
//{$define PACKAGE}
{$ifdef PACKAGE}
uses
unit1 in 'unit1.pas',
unit2 in 'unit2.pas',
...
{$else}
uses
unit1,
unit2,
...
{$endif}
The only problem is that whenever you add a new unit, Delphi erases your ifdef package at the top.
A: Every time I have to put conditionals into a project file I do this:
program a;
uses
ACondUnits;
...
unit ACondUnits;
interface
uses
{$IFDEF UseD7MM}
Delphi7MM;
{$ELSE}
FastMM4;
{$ENDIF}
implementation
end.
Maybe this trick works in packages too. Never tried.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Should constructor variables pass direct to private fields or properties? Now this is .NET, but I am sure the principle should apply to all OOP languages; to simplify, I take .NET as an example:
R# usually creates a constructor that passes the incoming variable to a private field, whereas I tend to pass it to a Property.
Any opinions on how these differ and what the best practice is here?
A: Using properties is OK as long as they are not virtual/overridden. Properties are essentially methods, and you should not call virtual methods from within the constructor because the appropriate type may not be constructed yet. Microsoft has listed their own set of guidelines, just scroll down to the bottom to see the relevant guidance and code snippet illustrating the problem (they illustrate it using methods, but as I mentioned .NET properties are essentially special methods).
A: Passing the parameter through the property setter allows you to keep any validation code in one place only.
A: I would recommend sending it to the Property, rather than directly to the private field, though your actual implementation would dictate the exact conditions. For example, sometimes there are events fired when you use the Property, and you don't want to fire those events during the constructor. Or perhaps you want to circumvent the Property logic for some other reason.
A: Be careful using the Property Setter. You may have code in the setter which can cause unexpected side effects.
A: I manipulate fields inside the constructor. Fields really represent the inherent state of your object, and the constructor job is to initialize this internal state. Properties are just here for encapsulation purpose, and are a part of the public interface to the object state.
The transformation logic you apply to the constructor arguments or to the properties input values before setting the internal state of the object could be very different. Anyway, if it is the case, I used to use an explicit transformation method called from the property setter and from the constructor, instead of directly chaining constructor on the property setter.
If there is no logic at all, I can't see why you would like to use property setter inside constructor.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: HTTPHandler tag in Web.Config breaks asmx Files In my ASP.Net 1.1 application, i've added the following to my Web.Config (within the System.Web tag section):
<httpHandlers>
<add verb="*" path="*.bcn" type="Internet2008.Beacon.BeaconHandler, Internet2008" />
</httpHandlers>
This works fine, and the HTTPHandler kicks in for files of type .bcn, and does its thing.. however for some reason all ASMX files stop working. Any idea why this would be the case?
Cheers
Greg
A: I got it... CQ, you were on the right track: I did need to add the .asmx handler again, but the .net 1.1 specific one. Final code is as follows:
<httpHandlers>
<add verb="*" path="*.bcn" type="Internet2008.Beacon.BeaconHandler, Internet2008" validate="false" />
<add verb="*" path="*.asmx" type="System.Web.Services.Protocols.WebServiceHandlerFactory, System.Web.Services, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" validate="false"/>
</httpHandlers>
I hope there are no other file types that are not getting handled properly because of this declaration. :|
Thanks for the help
greg
A: It sounds like it has an inherent <clear /> in it, although I don't know if I've seen this behaviour before. You could just add the general handler back; let me find you the code.
<add verb="*" path="*.asmx" type="System.Web.Services.Protocols.WebServiceHandlerFactory, System.Web.Services" validate="false">
I think thats the right element, give it a shot.
EDIT: That is odd, I don't have a copy of 2003 on this machine so I can't open a 1.1 but I thought that was the right declaration. You could try adding validate="false" into each element and see if that makes a difference.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What can cause ASPNET AJAX page to revert to normal ASPX mode ? / UpdatePanel broken I am using VS 2008 with a very simple UpdatePanel scenario.
But i cannot get UpdatePanel to work and cant seem to find out why
I have in fact reverted to a very simple example to validate it is not my code:
http://ajax.net-tutorials.com/controls/updatepanel-control/
In this example I click on either button and both text links update.
I don't get any errors, the page just behaves like a normal ASPX page.
What things do I need to check? I've been googling this for an hour and not found what I need.
Edit: Works in Visual Studio web server but not in IIS
A: If it's working locally, but not when deployed to a remote server, that usually indicates that you're using ASP.NET 2.0 and the ASP.NET AJAX extensions aren't installed on the remote server.
If it's a server you have administrative control over, you can download the installer here: http://www.microsoft.com/downloads/details.aspx?FamilyID=ca9d90fa-e8c9-42e3-aa19-08e2c027f5d6&displaylang=en
If it's a web host, tell them to get their act together.
A: Another option would be to check your web.config. You could for example create an new Ajax enabled ASP.NET website from Visual Studio. This will generate a correct web.config. Copy over all non-ajax sections from your existing web.config and you're set. This worked for me.
-Edoode
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do you get the state to which a (.net) form will be restored? I have a windows application and I want to save the size, position and WindowState between sessions. There is one caveat however. If the state when closing is minimized then I want to restore to the last visible state, i.e. Normal if it was normal before being minimized or maximized if it was maximized.
Is there a property, check, call do do this?
A: Here's an example on form persistance and saving window state between sessions.
As for saving the state prior to window being minimized, that's something that you need to handle yourself by 'remembering' last two window states, there's no property/event that can do that for you.
A: You could use application settings (user scope) and when the Form_Closing event is triggered on your form, you can choose how to modify the settings before you save them with Properties.Settings.Default.Save();
A: You can "remember" the restored position before minimizing/maximizing.
You can restore the window before saving the position; this has the disadvantage of making your window blink before closing.
Or you can call the Win32 function GetWindowPlacement via InterOp; it returns the restored rectangle.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Is it expected that all the units of a Project Group in Delphi 7 to be in one folder? Maybe this applies to other Delphis (I've only used 7). We've got our code broken up so that nearly every DLL in our fairly massive app is in a different folder.
99% of the open source stuff I've downloaded to plug into Delphi have had all their source munged into one folder.
It seems like this was an assumption that the developers of Delphi made about the coding practices of it's users that may be non-obvious.
A: I don't think so. In fact, In more recent versions they've added features to the project manager to make it easier to deal with the fact that code is spread around different directories (such as the flatten directories option), so I think it is accepted that this is how many people organize their code.
I suspect it's more to do with projects growing organically over time, and whether anyone takes the time to tidy up.
A: I for one definitely do not put all the sources into one directory but rather keep them in groups that have something in common. e.g. I use subversion externals quite extensively
(see http://www.dummzeuch.de/delphi/subversion/english.html , the section about externals).
A: I prefer different modules to be hosted in different folders, with a common folder for units that are shared among different modules; it makes management easy. e.g.
myClientServerApp (parent)
  Client folder (child)
  Server folder (child)
  lib (child)
A: Back in DELPHI 7 I also had all files in one folder. It was easy for small projects, but very hard for medium to big ones.
So I began to create a folder structure for all DELPHI projects, small or big.
Over the years I have been trying to improve this folder structure, and with every new project I make a small improvement so that it is simpler, more logical, and more organized.
These days I am trying to make some parts of it sharable to several projects. It's a work in progress.
A: It would seem that having all the units in one folder would save you headaches with doubly-named units.
*On the other hand, it might be handier to keep your projects in different folders when checking in and out of your version control.
*On the other hand it really doesn't promote code reuse to have them separated out like that.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Advanced searching in Vim Is there a way to search for multiple strings simultaneously in Vim? I recall reading somewhere that it was possible but somehow forgot the technique.
So for example, I have a text file and I want to search for "foo" and "bar" simultaneously (not necessarily as a single string, can be in different lines altogether).
How do I achieve that?
A: Actually I found the answer soon after I posted this (yes I did google earlier but was unable to locate it. Probably was just searching wrong)
The right solution is
/\(foo\|bar\)
@Paul Betts: The pipe has to be escaped (and, in Vim's default magic mode, so do the grouping parentheses)
A: /^joe.*fred.*bill/ : find joe AND fred AND Bill (Joe at start of line)
/fred\|joe : Search for FRED OR JOE
A: Vim supports regular expressions by starting in command mode with a '/'.
So using something like "/(foo\|bar)" (as was stated before) would solve the problem. It's good to know why that works and what you are using (regular expressions).
A: /\(foo\|bar\)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: SQL Server Provisioning Tool Running under Server 2008, SQL Server 2005 SP2 has the option to "Add New Administrator" from the SQL Server 2005 Surface Area Configuration. This launches the SQL Server 2005 User Provisioning Tool for Vista. You are then able to grant privileges such as Server SysAdmin role etc.
When I do this the privileges are copied across and appear to be saved but when I return to this tool it is clear that they have not been saved. Anybody else experience this or know how to "make it stick" ?
A: It seems that the tool is simply not verifying that the privileges are already granted, and it's granting the privileges on every run, which is ok with me (you could check the members of the sysadmin server role.)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Linux/C++ programmer to C#/Windows programmer I have been coding exclusively for a while now on Linux with C++. In my current job, it's a Windows shop with C# as the main language. I've retrained myself to use Visual Studio instead of emacs ( main reason is the integrated debugger in VC, emacs mode in VC helps ), set up Cygwin ( since I cannot live without a shell ) and picked up the ropes of a managed language. What tools, books, websites ( besides MSDN ) or pitfalls do you think I should check to make myself a more efficient Windows/C# coder?
A: Since you already know how to program in C++, check out:
A Programmers Introduction to C# 2.0, by Eric Gunnerson and Nick Wienholt
Nice balance between language reference and general .net information.
Similarly:
Essential .NET, Volume I: The Common Language Runtime by Don Box and Chris Sells
Really interesting "under the covers" net book (but written for .net version 1 so may be a little out of date).
A: Read about garbage collection in .Net. People coming from C++ land are used to explicit memory allocation and management. In C# explicit memory management is virtually non-existent.
Another topic you should check out is the difference between C# generics and C++ templates. I don't have a good link for that one, though.
Depending on the product you are working on, you might have to resort to calling Win32 API functions from C#. This is done through P/Invoke, so you might want to read a bit about it as well. And if you have to actually use it, http://pinvoke.net is a very useful collection of C# declarations for most of the Win32 APIs.
You might also want to learn at least the basics of COM, as very often C#/.Net applications choose/need to reuse components from third party vendors, which are often implemented as COM components. COM is a complex topic though. My favorite books on COM are Essential COM by Don Box and Professional DCOM by Richard Grimes. I would borrow these from a library, as all you need to read are the first few chapters (unless you want to go in depth).
A basic understanding of windows, messages and message queues is necessary if you are going to write client applications. You will be using WinForms or WPF/XAML for these, and both technologies do a good job of isolating the details from you; however, to be able to write good code you need to know what is going on behind the scenes. I am not sure what a good book would be for that, but MSDN has lots of information.
A: My background was predominantly C/Unix and Python with some java and dusty 2000-vintage VB6 when I first used C#. I was familiar with managed runtimes from the work I had done with Java and the .Net API's have a somewhat similar look-and-feel to earlier MS API's.
I found Troelsen's Pro C# and the .Net 2.0 Platform to be a really good C#/.Net resource. There are more recent editions out now.
A: Check out petzold's dotnet book zero. This might help.
A: The first thing to consider when switching from C++ to C# is the fact that they mostly share only some of the surface syntax; the difference in programming paradigms gets bigger and bigger as you dig deeper into .Net.
Get to know the C# core programming paradigms before starting to program; otherwise you might fall into the trap of writing C++ programs in C#, which isn't the best idea by a long stretch.
The most important things to get accustomed to are:
*
*Automatic memory management and garbage collection including the dispose pattern. Learn the basics and pitfalls.
*What are classes, interfaces, structs and primitives in .Net, and how they behave compared to C++.
*Events, delegates, properties, lambda expressions all of which are somewhat new concepts when coming from C++.
*.Net generics and differences between templates.
*strings, arrays, custom attributes, reflection, exceptions and threading basics in .Net, all of these are heavily used everywhere in .Net, and you must learn their intricacies to use them effectively.
*GUI programming in Winforms and ASP.Net (and maybe after there WPF), components, controtrols, databinding. The .Net GUI model.
First start with a generic .Net book that introduces you to all of these concepts. I recommend a book over reading tutorials and articles first of all so you can have a big complete picture of .Net at the end. Articles on the internet might not achieve this.
Best generic .Net book I've read:
Professional C#, 3rd edition. by Simon Robinson, Christian Nagel, Karli Watson, Jay Glynn, Morgan Skinner, Bill Evjen
And second, since you're from a C++ background and are used to working close to the metal and thinking in a way that is close to how the hardware works (raw memory management: pointers, allocations, etc.), I can only recommend one book that will really demystify what .Net is and what it does:
CLR Via C# by Jeffrey Richter
I can't stress enough how good this book is for every .Net developer, especially when coming from C++, while at the same time being one of the best .Net books I've read. The book is a pure pleasure to read and covers topics such as:
*
*.Net execution model (from MSIL to native code)
*Memory management (how the .net runtime and garbage collector manages memory, heap layout, memory generations, finalization, large object heap)
*Designing types
*Assembly loading, reflection, application domains
*and many more ...
This is the best advice I could give to anyone on their way to becoming an expert C# developer in the shortest time possible.
A: Get yourself a copy of Resharper. It's probably the single best productivity tool out there for straight up coding.
A: Shouldn't the CLR be the base? And shouldn't it be unimportant which .NET language is used? If I look through the .NET docs I can decide whether I'd like to see the stuff in VB.Net, C# or C++. So if you know C++, why shouldn't you use "managed C++"?
Regards
Friedrich
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Integrated Version Control for Visual Studio What version control systems have you used to manage projects developed with Visual Studio (2005)? What would you recommend and Why? What limitations have you found with your top rated version control system?
A: Covered many times
Search for 'Visual Studio Source Control' sorted by Votes
A: I've used SourceGear's Vault - it integrates nicely with VS as well as with FogBugz.
A: We really need more information to make good recommendations. For example, ClearCase is amazingly powerful, but unless you're in a decent-sized development studio it would be immensely wasteful and likely reduce overall productivity.
For personal work I like SVN, but that's mainly personal taste and being familiar with it.
A: Whatever you do, don't learn Git - after you learn it, you realize how every other SCM is trash and you'll hate every minute you have to use Perforce, SVN, or (God help you) VSS/TFS
A: I used to use SourceSafe, but have made the switch to Subversion and will never go back.
I use TortoiseSVN with VisualSVN. VisualSVN provides the IDE integration with Visual Studio so it feels much like SourceSafe.
The first big benefit is that it works very well over HTTP so I can work with a distributed team. I host my files with CVSDude.
Secondly, the fact that you don't need to check out a file before editing is a huge benefit, particularly when working with files that sit outside the Visual Studio project.
Thirdly, my source code actually feels safe. Ironically it never did with SourceSafe...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I create an installer for a .Net website, Windows Service, and more? I need to create an installer program that will do install the following:
*
*ASP.Net Website
*Windows Service
*SQL Express if it isn't installed and the user doesn't have a SQL Server
*Dundas Charts
*ASP.Net AJAX v.1.0
*ReportViewer control (for 2.0 Framework)
*Check Framework prerequisites (2.0)
*Configure IIS and app.config (data connection strings, etc.)
Is it realistic to be able to do this with a VS Setup Project? Or, should I be looking at other install tools?
A: You can use WiX
A: Most of the best open source tools and programs for Windows are distributed using NSIS
ASP.Net Website ( no listing - bug in stackoverflow - code not formatting while list above .. )
CreateDirectory $INSTDIR
SetOutPath $INSTDIR
; HERE UNZIP ACTUALLY THE FILES (ADD *.js files if needed )
; PACK ALL THE FILES EXCEPT THOSE WITH FILE EXTENSIONS after the /x
File /r /x *.suo /x *.MDF /x *.exclude /x *.ldf /x *.pl /x *.nsis /x *.cmd "siteFolderName\*.*"
*
*Windows Service and here
*SQL Express if it isn't installed and the user doesn't have a SQL Server
*Dundas Charts - call silent installer
*ASP.Net AJAX v.1.0 - call silent installer
*ReportViewer control (for 2.0 Framework)
*Check Framework prerequisites (2.0) - check NSIS system func
*Configure IIS and app.config (data connection strings, etc.) - I do this with preconfiguring the files in the installer and writing those during install time
A: I used WiX. I banged my head against the wall for a few days, then got the hang of it. The WiX mailing list is key if you have never worked with WiX before.
A: As previous posters already said, I would also recommend WiX. Visual Studio (up to version 2008) also has a proprietary built-in installer project type, but I would avoid it, because you need the full Visual Studio IDE to compile it. You cannot use command-line build scripts, and therefore it is useless when working with an automated build server.
WiX gives you all the flexibility you might need. And as far as I know, Microsoft will use it as the standard installer tool in Visual Studio 2010.
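For reference, the command-line build that WiX enables is just two steps (Product.wxs being a hypothetical authoring file name):
candle Product.wxs
light -out MyApp.msi Product.wixobj
candle compiles the XML authoring into a .wixobj object file and light links it into the final .msi, so both steps drop straight into an automated build script.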
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Best way to do Version Control for MS Excel What version control systems have you used with MS Excel (2003/2007)? What would you recommend and Why? What limitations have you found with your top rated version control system?
To put this in perspective, here are a couple of use cases:
*
*version control for VBA modules
*more than one person is working on an Excel spreadsheet and they may be making changes to the same worksheet, which they want to merge and integrate. This worksheet may have formulae, data, charts etc
*the users are not too technical and the fewer version control systems used the better
*Space constraint is a consideration. Ideally only incremental changes are saved rather than the entire Excel spreadsheet.
A: It depends whether you are talking about data, or the code contained within a spreadsheet. While I have a strong dislike of Microsoft's Visual SourceSafe and normally would not recommend it, it does integrate easily with both Access and Excel, and provides source control of modules.
[In fact the integration with Access, includes queries, reports and modules as individual objects that can be versioned]
The MSDN link is here.
A: I'm not aware of a tool that does this well but I've seen a variety of homegrown solutions. The common thread of these is to minimise the binary data under version control and maximise textual data to leverage the power of conventional scc systems. To do this:
*
*Treat the workbook like any other application. Separate logic, config and data.
*Separate code from the workbook.
*Build the UI programmatically.
*Write a build script to reconstruct the workbook.
A: I've just set up a spreadsheet that uses Bazaar, with manual check-in/out via TortoiseBZR. Given that this topic helped me with the save portion, I wanted to post my solution here.
The solution for me was to create a spreadsheet that exports all modules on save, and removes and re-imports the modules on open. Yes, this could be potentially dangerous for converting existing spreadsheets.
This allows me to edit the macros in the modules via Emacs (yes, emacs) or natively in Excel, and commit my BZR repository after major changes. Because all the modules are text files, the standard diff-style commands in BZR work for my sources except the Excel file itself.
I've setup a directory for my BZR repository, X:\Data\MySheet. In the repo are MySheet.xls and one .vba file for each of my modules (ie: Module1Macros). In my spreadsheet I've added one module that is exempt from the export/import cycle called "VersionControl". Each module to be exported and re-imported must end in "Macros".
Contents of the "VersionControl" module:
Sub SaveCodeModules()
'This code Exports all VBA modules
Dim i%, sName$
With ThisWorkbook.VBProject
For i% = 1 To .VBComponents.Count
If .VBComponents(i%).CodeModule.CountOfLines > 0 Then
sName$ = .VBComponents(i%).CodeModule.Name
.VBComponents(i%).Export "X:\Data\MySheet\" & sName$ & ".vba"
End If
Next i
End With
End Sub
Sub ImportCodeModules()
With ThisWorkbook.VBProject
For i% = 1 To .VBComponents.Count
ModuleName = .VBComponents(i%).CodeModule.Name
If ModuleName <> "VersionControl" Then
If Right(ModuleName, 6) = "Macros" Then
.VBComponents.Remove .VBComponents(ModuleName)
.VBComponents.Import "X:\Data\MySheet\" & ModuleName & ".vba"
End If
End If
Next i
End With
End Sub
Next, we have to setup event hooks for open / save to run these macros. In the code viewer, right click on "ThisWorkbook" and select "View Code". You may have to pull down the select box at the top of the code window to change from "(General)" view to "Workbook" view.
Contents of "Workbook" view:
Private Sub Workbook_Open()
ImportCodeModules
End Sub
Private Sub Workbook_BeforeSave(ByVal SaveAsUI As Boolean, Cancel As Boolean)
SaveCodeModules
End Sub
I'll be settling into this workflow over the next few weeks, and I'll post if I have any problems.
Thanks for sharing the VBComponent code!
A: Building upon @Demosthenex's work and @Tmdean's and @Jon Crowell's invaluable comments! (+1 them)
I save the module files in a git\ directory beside the workbook's location. Change that to your liking.
This will NOT track changes to Workbook code. So it's up to you to synchronize them.
Sub SaveCodeModules()
'This code Exports all VBA modules
Dim i As Integer, name As String
With ThisWorkbook.VBProject
For i = .VBComponents.count To 1 Step -1
If .VBComponents(i).Type <> vbext_ct_Document Then
If .VBComponents(i).CodeModule.CountOfLines > 0 Then
name = .VBComponents(i).CodeModule.name
.VBComponents(i).Export Application.ThisWorkbook.Path & _
"\git\" & name & ".vba"
End If
End If
Next i
End With
End Sub
Sub ImportCodeModules()
Dim i As Integer
Dim ModuleName As String
With ThisWorkbook.VBProject
For i = .VBComponents.count To 1 Step -1
ModuleName = .VBComponents(i).CodeModule.name
If ModuleName <> "VersionControl" Then
If .VBComponents(i).Type <> vbext_ct_Document Then
.VBComponents.Remove .VBComponents(ModuleName)
.VBComponents.Import Application.ThisWorkbook.Path & _
"\git\" & ModuleName & ".vba"
End If
End If
Next i
End With
End Sub
And then in Workbook module:
Private Sub Workbook_Open()
ImportCodeModules
End Sub
Private Sub Workbook_BeforeSave(ByVal SaveAsUI As Boolean, Cancel As Boolean)
SaveCodeModules
End Sub
A: Taking @Demosthenex's answer a step further, if you'd like to also keep track of the code in your Microsoft Excel Objects and UserForms you have to get a little bit tricky.
First I altered my SaveCodeModules() function to account for the different types of code I plan to export:
Sub SaveCodeModules(dir As String)
'This code Exports all VBA modules
Dim moduleName As String
Dim vbaType As Integer
With ThisWorkbook.VBProject
For i = 1 To .VBComponents.count
If .VBComponents(i).CodeModule.CountOfLines > 0 Then
moduleName = .VBComponents(i).CodeModule.Name
vbaType = .VBComponents(i).Type
If vbaType = 1 Then
.VBComponents(i).Export dir & moduleName & ".vba"
ElseIf vbaType = 3 Then
.VBComponents(i).Export dir & moduleName & ".frm"
ElseIf vbaType = 100 Then
.VBComponents(i).Export dir & moduleName & ".cls"
End If
End If
Next i
End With
End Sub
The UserForms can be exported and imported just like VBA code. The only difference is that two files will be created when a form is exported (you'll get a .frm and a .frx file for each UserForm). One of these holds the code you've written and the other is a binary file which (I'm pretty sure) defines the layout of the form.
Microsoft Excel Objects (MEOs) (meaning Sheet1, Sheet2, ThisWorkbook etc) can be exported as a .cls file. However, when you want to get this code back into your workbook, if you attempt to import it the same way you would a VBA module, you'll get an error if that sheet already exists in the workbook.
To get around this issue, I decided not to try to import the .cls file into Excel, but to read the .cls file into excel as a string instead, then paste this string into the empty MEO. Here is my ImportCodeModules:
Sub ImportCodeModules(dir As String)
Dim modList(0 To 0) As String
Dim vbaType As Integer
' delete all forms, modules, and code in MEOs
With ThisWorkbook.VBProject
For Each comp In .VBComponents
moduleName = comp.CodeModule.Name
vbaType = .VBComponents(moduleName).Type
If moduleName <> "DevTools" Then
If vbaType = 1 Or _
vbaType = 3 Then
.VBComponents.Remove .VBComponents(moduleName)
ElseIf vbaType = 100 Then
' we can't simply delete these objects, so instead we empty them
.VBComponents(moduleName).CodeModule.DeleteLines 1, .VBComponents(moduleName).CodeModule.CountOfLines
End If
End If
Next comp
End With
' make a list of files in the target directory
Set FSO = CreateObject("Scripting.FileSystemObject")
Set dirContents = FSO.getfolder(dir) ' figure out what is in the directory we're importing
' import modules, forms, and MEO code back into workbook
With ThisWorkbook.VBProject
For Each moduleName In dirContents.Files
' I don't want to import the module this script is in
If moduleName.Name <> "DevTools.vba" Then
' if the current code is a module or form
If Right(moduleName.Name, 4) = ".vba" Or _
Right(moduleName.Name, 4) = ".frm" Then
' just import it normally
.VBComponents.Import dir & moduleName.Name
' if the current code is a microsoft excel object
ElseIf Right(moduleName.Name, 4) = ".cls" Then
Dim count As Integer
Dim fullmoduleString As String
Open moduleName.Path For Input As #1
count = 0 ' count which line we're on
fullmoduleString = "" ' build the string we want to put into the MEO
Do Until EOF(1) ' loop through all the lines in the file
Line Input #1, moduleString ' the current line is moduleString
If count > 8 Then ' skip the junk at the top of the file
' append the current line to the string we'll insert into the MEO
fullmoduleString = fullmoduleString & moduleString & vbNewLine
End If
count = count + 1
Loop
' insert the lines into the MEO
.VBComponents(Replace(moduleName.Name, ".cls", "")).CodeModule.InsertLines .VBComponents(Replace(moduleName.Name, ".cls", "")).CodeModule.CountOfLines + 1, fullmoduleString
Close #1
End If
End If
Next moduleName
End With
End Sub
In case you're confused by the dir input to both of these functions, that is just your code repository! So, you'd call these functions like:
SaveCodeModules "C:\...\YourDirectory\Project\source\"
ImportCodeModules "C:\...\YourDirectory\Project\source\"
A: I use git, and today I ported this (git-xlsx-textconv) to Python, since my project is based on Python code, and it interacts with Excel files. This works for at least .xlsx files, but I think it will work for .xls too. Here's the github link. I wrote two versions, one with each row on its own line, and another where each cell is on its own line (the latter was written because git diff doesn't like to wrap long lines by default, at least here on Windows).
This is my .gitconfig file (this allows the differ script to reside in my project's repo):
[diff "xlsx"]
binary = true
textconv = python `git rev-parse --show-toplevel`/src/util/git-xlsx-textconv.py
if you want the script to be available for many different repos, then use something like this:
[diff "xlsx"]
binary = true
textconv = python C:/Python27/Scripts/git-xlsx-textconv.py
my .gitattributes file:
*.xlsx diff=xlsx
A: TortoiseSVN is an astonishingly good Windows client for the Subversion version control system. One feature which I just discovered that it has is that when you click to get a diff between versions of an Excel file, it will open both versions in Excel and highlight (in red) the cells that were changed. This is done through the magic of a vbs script, described here.
You may find this useful even if NOT using TortoiseSVN.
A: One thing you could do is to have the following snippet in your Workbook:
Sub SaveCodeModules()
'This code Exports all VBA modules
Dim i%, sName$
With ThisWorkbook.VBProject
For i% = 1 To .VBComponents.Count
If .VBComponents(i%).CodeModule.CountOfLines > 0 Then
sName$ = .VBComponents(i%).CodeModule.Name
.VBComponents(i%).Export "C:\Code\" & sName$ & ".vba"
End If
Next i
End With
End Sub
I found this snippet on the Internet.
Afterwards, you could use Subversion to maintain version control. For example by using the command line interface of Subversion with the 'shell' command within VBA. That would do it. I'm even thinking of doing this myself :)
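For illustration, a minimal (untested) sketch of that 'shell' idea, assuming svn.exe is on the PATH and C:\Code is already a working copy:
Sub CommitCodeModules()
    'Run a Subversion commit on the exported module files
    Shell "cmd /c svn commit -m ""Automated commit from Excel"" C:\Code", vbHide
End Sub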
A: I would like to recommend a great open-source tool called Rubberduck that has version control of VBA code built in. Try it!
A: If you are looking at an office setting with regular, non-technical office users, then SharePoint is a viable alternative. You can set up document folders with version control enabled, plus check-ins and check-outs. That makes it friendlier for regular office users.
A: in response to mattlant's reply - sharepoint will work well as a version control only if the version control feature is turned on in the document library.
in addition be aware that any code that calls other files by relative paths wont work. and finally any links to external files will break when a file is saved in sharepoint.
A: After searching for ages and trying out many different tools, I've found my answer to the vba version control problem here: https://stackoverflow.com/a/25984759/2780179
It's a simple excel addin for which the code can be found here
There are no duplicate modules after importing. It exports your code automatically, as soon as you save your workbook, without modifying any existing workbooks.
It comes together with a vba code formatter.
A: Let me summarise what you would like to version control and why:
*
*What:
*
*Code (VBA)
*Spreadsheets (Formulae)
*Spreadsheets (Values)
*Charts
*...
*Why:
*
*Audit log
*Collaboration
*Version comparison ("diffing")
*Merging
As others have posted here, there are a couple of solutions on top of existing version control systems such as:
*
*Git
*Mercurial
*Subversion
*Bazaar
If your only concern is the VBA code in your workbooks, then the approach Demosthenex proposes above, or VbaGit (https://github.com/brucemcpherson/VbaGit), works very well and is relatively simple to implement. The advantages are that you can rely on well-proven version control systems and choose one according to your needs (have a look at https://help.github.com/articles/what-are-the-differences-between-svn-and-git/ for a brief comparison between Git and Subversion).
If you not only worry about code but also about the data in your sheets ("hardcoded" values and formula results), you can use a similar strategy for that: Serialise the contents of your sheets into some text format (via Range.Value) and use an existing version control system. Here's a very good blog post about this: https://wiki.ucl.ac.uk/display/~ucftpw2/2013/10/18/Using+git+for+version+control+of+spreadsheet+models+-+part+1+of+3
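For illustration, a minimal sketch of that serialisation idea (names are illustrative; it dumps each sheet's non-empty cells as address/value pairs that a text-based VCS can diff):
Sub ExportSheetValues()
    'Write one text file per worksheet, listing non-empty, non-error cells
    Dim ws As Worksheet, cell As Range, f As Integer
    For Each ws In ThisWorkbook.Worksheets
        f = FreeFile
        Open ThisWorkbook.Path & "\" & ws.Name & ".txt" For Output As #f
        For Each cell In ws.UsedRange
            If Not IsEmpty(cell.Value) And Not IsError(cell.Value) Then
                Print #f, cell.Address & vbTab & CStr(cell.Value)
            End If
        Next cell
        Close #f
    Next ws
End Sub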
However, spreadsheet comparison is a non-trivial algorithmic problem. There are a few tools around, such as Microsoft's Spreadsheet Compare (https://support.office.com/en-us/article/Overview-of-Spreadsheet-Compare-13fafa61-62aa-451b-8674-242ce5f2c986), Exceldiff (http://exceldiff.arstdesign.com/) and DiffEngineX (https://www.florencesoft.com/compare-excel-workbooks-differences.html). But it's another challenge to integrate these comparisons with a version control system like Git.
Finally, you have to settle on a workflow that suits your needs. For a simple, tailored Git for Excel workflow, have a look at https://www.xltrail.com/blog/git-workflow-for-excel.
A: Use any of the standard version control tools like SVN or CVS. Limitations would depend on what the objective is. Apart from a small increase in the size of the repository, I didn't face any issues.
A: I have been looking into this too. It appears that the latest Team Foundation Server 2010 may have an Excel Add-In.
Here is a clue:
http://team-foundation-server.blogspot.com/2009/07/tf84037-there-was-problem-initializing.html
A: Actually, there are only a handful of solutions to track and compare changes in macro code - most of those were named here already. I have been browsing the web and came across this new tool worth mentioning:
XLTools Version Control for VBA macros
*
*version control for Excel sheets and VBA modules
*preview and diff changes before committing a version
*great for collaborative work of several users on the same file (track who changed what/when/comments)
*compare versions and highlight changes in code line-by-line
*suitable for users who are not tech-savvy, or Excel-savvy for that matter
*version history is stored in Git-repository on your own PC - any version can be easily recovered
VBA code versions side by side, changes are visualized
A: There is also a program called Beyond Compare that has a quite nice Excel file compare. I found a screenshot in Chinese that briefly shows this:
Original image source
There is a 30 day trial on their page
A: You might have tried using Microsoft's Excel XML in a zip container (.xlsx and .xlsm) for version control, and found that the VBA was stored in vbaProject.bin (which is useless for version control).
The solution is simple.
*
* Open the excel file with LibreOffice Calc
* In LibreOffice Calc
*
* File
* Save as
* Save as type: ODF Spreadsheet (.ods)
* Close LibreOffice Calc
* rename the new file's file extension from .ods to .zip
* create a folder for the spreadsheet in a GIT maintained area
* extract the zip into its Git folder
* commit to GIT
When you repeat this with the next version of the spreadsheet you'll have to make sure you make the folder's files exactly match those in the zip container (and don't leave any deleted files behind).
A: I found a very simple solution to this question which meets my needs. I add one line to the bottom of all of my macros which exports a *.txt file with the entire macro code each time it is run. The code:
ActiveWorkbook.VBProject.VBComponents("moduleName").Export "C:\Path\To\Spreadsheet\moduleName.txt"
(Found on Tom's Tutorials, which also covers some setup you may need to get this working.)
Since I'll always run the macro whenever I'm working on the code, I'm guaranteed that git will pick up the changes. The only annoying part is that if I need to checkout an earlier version, I have to manually copy/paste from the *.txt into the spreadsheet.
A: It depends on what level of integration you want, I've used Subversion/TortoiseSVN which seems fine for simple usage. I have also added in keywords but there seems to be a risk of file corruption. There's an option in Subversion to make the keyword substitutions fixed length and as far as I understand it will work if the fixed length is even but not odd. In any case you don't get any useful sort of diff functionality, I think there are commercial products that will do 'diff'. I did find something that did diff based on converting stuff to plain text and comparing that, but it wasn't very nice.
A: It should work with most VCSs (depending on other criteria you might choose SVN, CVS, Darcs, TFS, etc.); however, it will actually version the complete file each time (because it is a binary format), meaning that the "what changed" question is not so easy to answer.
You can still rely on log messages if people complete them, but you might also try the new XML based formats from Office 2007 to gain some more visibility (although it would still be hard to weed through the tons of XML, plus AFAIK the XML file is zipped on the disk, so you would need a pre-commit hook to unzip it for text diff to work correctly).
A: I wrote a revision controlled spreadsheet using VBA.
It is geared more for engineering reports where you have multiple people working on a Bill Of Materials or Schedule, and then at some point in time you want to create a snapshot revision that shows adds, deletes and updates from the previous rev.
Note: it is a macro enabled workbook that you need to sign in to download from my site (you can use OpenID)
All the code is unlocked.
Rev Controlled Spreadsheet
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "180"
}
|
Q: Standardized Error Classification & Handling I need to standardize on how I classify and handle errors/exceptions 'gracefully'.
I currently use a process by which I report the errors to a function, passing an error-number, severity-code, location-info and extra-info-string. This function returns boolean true if the error is fatal and the app should die, false otherwise. As part of its process, apart from visual feedback to the user, the function also logs to file any errors above some severity level.
Error-number indexes an array of strings explaining the type of error, e.g. 'File access', 'User Input', 'Thread-creation', 'Network access', etc. Severity-code is a binary OR of 0, 1, 2 or 4: 0=informative, 1=user_retry, 2=cannot_complete, 4=cannot_continue. Location-info is module & function, and Extra-info is parameter and local variable values.
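For concreteness, a rough C++ sketch of the current function (all names are illustrative, not my real code):
#include <cstdio>

enum Severity { k_informative = 0, k_userRetry = 1, k_cannotComplete = 2, k_cannotContinue = 4 };

static const char* s_errorTypes[] = { "File access", "User Input", "Thread-creation", "Network access" };

// Returns true if the error is fatal and the app should die.
bool ReportError(int errorNumber, int severity, const char* location, const char* extraInfo)
{
    std::fprintf(stderr, "%s error at %s (severity %d): %s\n",
                 s_errorTypes[errorNumber], location, severity, extraInfo);
    // ...visual feedback to the user, and log-to-file above some severity level...
    return (severity & k_cannotContinue) != 0;
}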
I want to make this into a standard way of error-handling that I can put in a library and re-use in all my apps. I mainly use C/C++ on Linux, but would want to use the resultant library with other languages/platforms as well.
*
*An idea is to extend the error-type
array to indicate some default
behavior for a given severity-level,
but should this then become the
action taken and give no options to
the user?
*Or: should such extension be a
sub-array of options that the user
need to pick from? The problem with
this is that the options would of
necessity be generalized
programming-related options that may
very-well completely baffle an
end-user.
*Or: should each app that uses the
error-lib routine pass along its own
array of either errors or default
behaviors - but this will defeat the
purpose of the library...
*Or: should the severity-levels be
handled in each app?
Or: what do you suggest? How do you handle errors? How can I improve this?
A: How you handle the errors really depends upon the application. A web application has a different error-catching mechanism than a desktop application, and both of those differ drastically from an asynchronous messaging system.
That being said, a common practice in error handling is to handle it at the lowest possible level where it can be dealt with. This usually means the application layer or the GUI.
I like the severity levels. Perhaps you can have a pluggable error-collection library with different error output providers and a severity-level provider.
Output providers could include things like a loggingProvider and an IgnoreErrorsProvider.
Severity providers would probably be something implemented by each project since severity levels are usually determined by that type of project in which it occurs. (For example, network connection issues are more severe for a banking application than for a contact management system).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Restrict a string to whitelisted characters using XSLT 1.0 Question
Using XSLT 1.0, given a string with arbitrary characters how can I get back a string that meets the following rules.
*
*First character must be one of these: a-z, A-Z, colon, or underscore
*All other characters must be any of those above or 0-9, period, or hyphen
*If any character does not meet the above rules, replace it with an underscore
Background
In an XSLT I'm translating some attributes into elements, but I need to be sure the attribute doesn't contain any values that can't be used in an element name. I don't care much about the integrity of the attribute being converted to the name as long as it's being converted predictably. I also don't need to compensate for every valid character in an element name (there's a bunch).
The problem I was having was with the attributes having spaces coming in, which the translate function can easily convert to underscores:
translate(@name,' ','_')
But soon after I found some of the attributes using slashes, so I have to add that now too. This will quickly get out of hand. I want to be able to define a whitelist of allowed characters, and replace any non-allowed characters with an underscore, but translate works as by replacing from a blacklist.
A: You could write a recursive template to do this, working through the characters in the string one by one, testing them and changing them if necessary. Something like:
<xsl:template name="normalizeName">
<xsl:param name="name" />
<xsl:param name="isFirst" select="true()" />
<xsl:if test="$name != ''">
<xsl:variable name="first" select="substring($name, 1, 1)" />
<xsl:variable name="rest" select="substring($name, 2)" />
<xsl:choose>
<xsl:when test="contains('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ:_', $first) or
                (not($isFirst) and contains('0123456789.-', $first))">
<xsl:value-of select="$first" />
</xsl:when>
<xsl:otherwise>
<xsl:text>_</xsl:text>
</xsl:otherwise>
</xsl:choose>
<xsl:call-template name="normalizeName">
<xsl:with-param name="name" select="$rest" />
<xsl:with-param name="isFirst" select="false()" />
</xsl:call-template>
</xsl:if>
</xsl:template>
However, there is shorter way of doing this if you're prepared for some hackery. First declare some variables:
<xsl:variable name="underscores"
select="'_______________________________________________________'" />
<xsl:variable name="initialNameChars"
select="'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ:_'" />
<xsl:variable name="nameChars"
select="concat($initialNameChars, '0123456789.-')" />
Now the technique is to take the name and identify the characters that aren't legal by replacing all the characters in the name that are legal with nothing. You can do this with the translate() function. Once you've got the set of illegal characters that appear in the string, you can replace them with underscores using the translate() function again. Here's the template:
<xsl:template name="normalizeName">
<xsl:param name="name" />
<xsl:variable name="first" select="substring($name, 1, 1)" />
<xsl:variable name="rest" select="substring($name, 2)" />
<xsl:variable name="illegalFirst"
select="translate($first, $initialNameChars, '')" />
<xsl:variable name="illegalRest"
select="translate($rest, $nameChars, '')" />
<xsl:value-of select="concat(translate($first, $illegalFirst, $underscores),
translate($rest, $illegalRest, $underscores))" />
</xsl:template>
The only thing you have to watch out for is that the string of underscores needs to be long enough to cover all the illegal characters that might appear within a single name. Making it the same length as the longest name you're likely to encounter will do the trick (though probably you could get away with it being a lot shorter).
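For instance, a hypothetical call site that turns a @name attribute into a safe element name could look like this:
<xsl:variable name="safeName">
  <xsl:call-template name="normalizeName">
    <xsl:with-param name="name" select="@name" />
  </xsl:call-template>
</xsl:variable>
<xsl:element name="{$safeName}">
  <xsl:value-of select="." />
</xsl:element>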
UPDATE:
I wanted to add to this answer. In order to generate required length underscore string you can use this template.
<!--Generate string with given number of replacement-->
<xsl:template name="gen-replacement">
<xsl:param name="n"/>
<xsl:if test="$n > 0">
<xsl:call-template name="gen-replacement">
<xsl:with-param name="n" select="$n - 1"/>
</xsl:call-template>
<xsl:text>_</xsl:text>
</xsl:if>
</xsl:template>
And call it when you need to generate underscores:
<xsl:variable name="replacement"><xsl:call-template name="gen-replacement"><xsl:with-param name="n" select="string-length($value)"/></xsl:call-template></xsl:variable>
A: As far as I'm aware, XSLT 1.0 doesn't have a built-in for this. XSLT 2.0 allows you to use regexes, though I'm sure you're all too aware of that.
If, on the off chance, you're using the MS parser, you can write .NET extension libraries that you can leverage in your XSLT; I wrote about this some months ago here.
If you're using something like Saxon, I am pretty certain they also provide ways of coding your own extensions, and they may indeed have an extension of their own already, but I'm unfamiliar with that engine.
Hope this helps.
A: As another alternative there is a string function that might work for you in the XSLT standard library. http://xsltsl.sourceforge.net/string.html#template.str:string-match
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do you prefer to organize exception definitions? I almost feel embarrassed to ask, but I always struggle with how to organize exception definitions. The three ways I've done this before are:
*
*Use the file-per-class rule. I'm not crazy about this because it clutters up my directory structure and namespaces. I could organize them into subdirectories and segment namespaces for them, but I don't really like that, and that's not how the standard libraries usually do it.
*put the definitions in a file containing the related class(es). I don't really like this either because then exception definitions are scattered about and may be hard to find without the aid of a code navigation tool.
*One file with all the exception definitions for a namespace or "package" of related classes. This is kind of a compromise between the above two, but it may leave situations in which it's hard to tell which exceptions "belong" to a particular group of classes or set of functionality.
I don't really like any of the above, but is there a sort of best-practice that I haven't picked up on that would be better?
Edit: Interesting. From the "Programming Microsoft Visual C# 2008: The Language", Donis suggests:
For convenience and maintainability,
deploy application exceptions as a
group in a separate assembly. (p. 426)
I wonder why?
A: I tend to place the Exceptions one per file in the same package as the objects which produce them, in individual files. This is the paradigm used by the Java APIs and the .Net libraries, so most people are at least familiar with the organization of the objects at that point.
Modern IDEs do a good enough job of keeping track of the files in a directory that the benefits of having the class in a file of the same name tend to outweigh the value of having fewer files.
A: In C++, I've always defined classes that I will use as exceptions in the same namespace as the classes which throw them, co-locating them in the same header. For general purpose exceptions, that will be shared between classes, I use a single header to group them together, along with other utility functions and classes.
I don't believe there's any one correct way of doing things in this regard; it's whatever works for you in your context.
A: I use the following approaches:
*
*Exception class in a separate file: when it's generic and can be thrown by more than one class
*Exception class together with the class throwing it: when there is only one such class. This makes sense because the exception is part of that class' interface
A variant on the latter is making the exception a member of the throwing class. I used to do that but found it cumbersome.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: What does "Cannot evaluate expression because the code of the current method is optimized." mean? I wrote some code with a lot of recursion, that takes quite a bit of time to complete. Whenever I "pause" the run to look at what's going on I get:
Cannot evaluate expression because the code of the current method is optimized.
I think I understand what that means. However, what puzzles me is that after I hit step, the code is not "optimized" anymore, and I can look at my variables. How does this happen? How can the code flip back and forth between optimized and non-optimzed code?
A: This drove me crazy. I tried attaching with Managed and Native code - no go.
This worked for me and I was finally able to evaluate all expressions :
*
*Go into Project / Properties
*Select the Build tab and click
Advanced...
*Make sure Debug Info is set to "full"
(not pdb-only)
*Debug your project - voila!
A: The below worked for me, thanks @Vin.
I had this issue when I was using VS 2015. My solution configuration has (Debug) selected. I resolved this by unchecking the Optimize Code property under project properties.
Project (right Click)=> Properties => Build (tab) => uncheck Optimize code
A: While the Debugger.Break() line is on top of the call stack you can't evaluate expressions. That's because that line is optimized. Press F10 to move to the next line - a valid line of code - and the watch will work.
A: The Debugger uses FuncEval to allow you to "look at" variables. FuncEval requires threads to be stopped in managed code at a GarbageCollector safe point. Manually "pausing" the run in the IDE causes all threads to stop as soon as possible. Your highly recursive code will tend to stop at an unsafe point. Hence, the debugger is unable to evaluate expressions.
Pressing F10 will move to the next Funceval Safe point and will enable function evaluation.
For further information review the rules of FuncEval.
A: Make sure you do not have something like the following
[assembly: Debuggable(DebuggableAttribute.DebuggingModes.IgnoreSymbolStoreSequencePoints)]
in your AssemblyInfo
A: You are probably trying to debug your app in release mode instead of debug mode, or you have optimizations turned on in your compile settings.
When the code is compiled with optimizations, certain variables are thrown away once they are no longer used in the function, which is why you are getting that message. In debug mode with optimizations disabled, you shouldn't get that error.
A: Look for a function call with many params and try decreasing the number until debugging returns.
A: Friend of a friend from Microsoft sent this:
http://blogs.msdn.com/rmbyers/archive/2008/08/16/Func_2D00_eval-can-fail-while-stopped-in-a-non_2D00_optimized-managed-method-that-pushes-more-than-256-argument-bytes-.aspx
The most likely problem is that your call stack is getting optimized because your method signature is too large.
A: Had the same problem but was able to resolve it by turning off exception trapping in the debugger. Click [Debug][Exceptions] and set the exceptions to "User-unhandled".
Normally I have this off but it comes in handy occasionally. I just need to remember to turn it off when I'm done.
A: I had this issue when I was using VS 2010. My solution configuration has (Debug) selected. I resolved this by unchecking the Optimize Code property under project properties.
Project (right Click)=> Properties => Build (tab) => uncheck Optimize code
A: In my case I had 2 projects in my solution and was running a project that was not the startup project.
When I changed it to startup project the debugging started to work again.
Hope it helps someone.
A: Assessment:
In .NET, “Function Evaluation (funceval)” is the ability of the CLR to inject an arbitrary call while the debuggee is stopped somewhere. A funceval takes over the debugger’s chosen thread to execute the requested method; once the funceval finishes, it fires a debug event. Technically, the CLR has defined ways for the debugger to issue a funceval.
The CLR allows a funceval to be initiated only on threads that are at both a GC-safe point (i.e. where the thread will not block GC) and a Funceval-Safe (FESafe) point (i.e. where the CLR can actually hijack the thread for the funceval). Thus, for the CLR, a thread must be:
*
*stopped in managed code (and at a GC-safe point): this implies that we cannot do a funceval in native code. Since native code is outside the CLR’s control, it cannot set up the funceval.
*stopped at a 1st chance or unhandled managed exception (and at a GC safe point): i.e at time of exception, to inspect as much as possible to determine why that exception occurred. (e.g: debugger may try to evaluate and see the Message property on raised exception.)
Overall, common ways to stop in managed code include stopping at a breakpoint, step, Debugger.Break call, intercepting an exception, or at a thread start. This helps in evaluating the method and expressions.
Possible resolutions:
Based on the assessment, if a thread is not at FESafe and GCSafe points, the CLR will not be able to hijack the thread to initiate a funceval. Generally, the following helps to make sure a funceval initiates when expected:
Step #1:
Make sure that you are not trying to debug a “Release” build. Release is fully optimized and thus will lead to the error in discussion. By using the Standard toolbar or the Configuration Manager, you can switch between Debug & Release.
Step #2:
If you still get the error, Debug option might be set for optimization. Verify & Uncheck the “Optimize code” property under Project “Properties”:
Right click the Project
Select option “Properties”
Go to “Build” tab
Uncheck the checkbox “Optimize code”
Step #3:
If you still get the error, Debug Info mode might be incorrect. Verify & set it to “full” under “Advanced Build Settings”:
Right click the Project
Select option “Properties”
Go to “Build” tab
Click “Advanced” button
Set “Debug Info” as “full”
Step #4:
If you still face the issue, try the following:
Do a “Clean” & then a “Rebuild” of your solution file
While debugging:
Go to modules window (VS Menu -> Debug -> Windows -> Modules)
Find your assembly in the list of loaded modules.
Check the Path listed against the loaded assembly is what you expect it to be
Check the modified Timestamp of the file to confirm that the assembly was actually rebuilt
Check whether or not the loaded module is optimized or not
Conclusion:
It’s not an error but an information based on certain settings and as designed based on how .NET runtime works.
A: In my case I was in Release mode; once I changed to Debug, it all worked.
A: I had a similar issue and it got resolved when I built the solution in Debug mode and replaced the .pdb file in the execution path.
A: I believe that what you are seeing is a result of the optimisations - sometimes a variable will be reused - particularly those that are created on the stack. For example, suppose you have a method that uses two (local) integers. The first integer is declared at the start of the method, and is used solely as a counter for a loop. Your second integer is used after the loop has been completed, and it stores the result of a calculation that is later written out to file. In this case, the optimiser MAY decide to reuse your first integer, saving the code needed for the second integer. When you try to look at the second integer early on, you get the message that you are asking about "Cannot evaluate expression". Though I cannot explain the exact circumstances, it is possible for the optimiser to transfer the value of the second integer into a separate stack item later on, resulting in you then being able to access the value from the debugger.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51"
}
|
Q: Inline style to act as :hover in CSS I know that embedding CSS styles directly into the HTML tags they affect defeats much of the purpose of CSS, but sometimes it's useful for debugging purposes, as in:
<p style="font-size: 24px">asdf</p>
What's the syntax for embedding a rule like:
a:hover {text-decoration: underline;}
into the style attribute of an A tag? It's obviously not this...
<a href="foo" style="text-decoration: underline">bar</a>
...since that would apply all the time, as opposed to just during hover.
A: If you are only debugging, you might use javascript to modify the css:
<a onmouseover="this.style.textDecoration='underline';"
onmouseout="this.style.textDecoration='none';">bar</a>
A: A simple solution:
<a href="#" onmouseover="this.style.color='orange';" onmouseout="this.style.color='';">My Link</a>
Or
<script>
/** Change the style **/
function overStyle(object){
object.style.color = 'orange';
// Change some other properties ...
}
/** Restores the style **/
function outStyle(object){
object.style.color = '';
// Restore the rest ...
}
</script>
<a href="#" onmouseover="overStyle(this)" onmouseout="outStyle(this)">My Link</a>
A: If it's for debugging, just add a css class for hovering (since elements can have more than one class):
a.hovertest:hover
{
text-decoration:underline;
}
<a href="http://example.com" class="foo bar hovertest">blah</a>
A: I'm afraid it can't be done, the pseudo-class selectors can't be set in-line, you'll have to do it on the page or on a stylesheet.
I should mention that technically you should be able to do it according to the CSS spec, but most browsers don't support it
Edit: I just did a quick test with this:
<a href="test.html" style="{color: blue; background: white}
:visited {color: green}
:hover {background: yellow}
:visited:hover {color: purple}">Test</a>
And it doesn't work in IE7, IE8 beta 2, Firefox or Chrome. Can anyone else test in any other browsers?
A: I put together a quick solution for anyone wanting to create hover popups without CSS using the onmouseover and onmouseout behaviors.
http://jsfiddle.net/Lk9w1mkv/
<div style="position:relative;width:100px;background:#ddffdd;overflow:hidden;" onmouseover="this.style.overflow='';" onmouseout="this.style.overflow='hidden';">first hover<div style="width:100px;position:absolute;top:5px;left:110px;background:white;border:1px solid gray;">stuff inside</div></div>
A: If that <p> tag is created from JavaScript, then you do have another option: use JSS to programmatically insert stylesheets into the document head. It does support '&:hover'. https://cssinjs.org/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "119"
}
|
Q: What are some non-standard ways to use namespaces? I am interested in unprecedented, cool, and esoteric ways to use namespaces. I know that many advanced developers "hack" namespaces by, for example, using them as references to string constants. In the string constants example, the idea is to implement DRY (DRY = Don't Repeat Yourself) so you can keep all your strings in one file.
note: I am looking for answers related to "common" languages such as C#, Ruby, Java, etc.
A: One esoteric use I often resort to is when defining enums in C++, especially when there are several types in a certain context. This enables usage such as Quality::k_high and Importance::k_high in related contexts. Enums also often sport 'unknown' values (usually to represent cases where none have been set), which would otherwise need qualification to disambiguate the constants (say, k_qualityNone and k_importanceNone); namespaces avoid that need.
A definition will thus look like:
namespace Quality {
enum Type { k_high, k_medium, k_low, k_none };
}
and
namespace Importance {
enum Type { k_high, k_medium, k_low, k_none };
}
Functions and methods will then take an argument of type Quality::Type (and Importance::Type), which is rather descriptive and nice. Individual enumeration constants are also qualified similarly as mentioned earlier (Quality::k_low).
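For example (a purely hypothetical signature):
void EncodeVideo(Quality::Type quality, Importance::Type importance);

// The call site reads unambiguously:
EncodeVideo(Quality::k_high, Importance::k_low);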
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: How to differentiate an HTTP Request submitted from an HTML form and an HTTP Request submitted from a client? Is there any way (in a Java Servlet) to determine whether an HTTP POST or GET request is the result of a submission from an HTML form or something else?
A: You could possibly do it with a hidden form field + a cookie.
What you could do is set up a nonce, and have that as the hidden field of the form. You would then apply that to a cookie that is sent along with the form. The cookie should be linked to the hidden field, and should also contain some kind of nonce. Finally, when the form is submitted, you can check the cookie and hidden field, and see if they are correct. If you want, link it up to the IP address and user agent of the original request for the form. You could even spice all this up with some JavaScript: make the hidden field blank to start with, then use some Ajax to request the hidden-field nonce from the server.
This won't be perfect, but that should get you 80%-90% of the way there. Someone with decent HTTP skills could still spoof it though.
It raises the question however, why do you want to differentiate the request at that level?
Or are you really just trying to figure out whether or not the user hit the "submit" button? (If that is the case, then the name/value pair of the submit button should be in the request entity/query string... depending on the form method.)
A: I think it is impossible unless the client itself is co-operating (meaning the client sets some header)
A: Not really. You could check the user agent string but that can be set by the caller.
A: Well, you could look at the agent string and see if it was from a browser, or from your client app (assuming it has its own agent string)
A: If you have control over the client, then you could attach a custom header to identify the sender.
Some JavaScript libraries already do this when making XMLHttpRequests, so that you can handle Ajax calls differently from standard requests.
Examine the headers of each incoming request to see if there is anything you could use.
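For example, a minimal servlet sketch that checks the X-Requested-With header (a convention set by many JavaScript libraries on Ajax calls; it is not standard and can be forged or omitted):
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class OriginAwareServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Present on most Ajax requests made by common JS libraries;
        // absent on a plain HTML form submission, but trivially spoofable.
        boolean looksLikeAjax = "XMLHttpRequest".equals(req.getHeader("X-Requested-With"));
        resp.getWriter().println(looksLikeAjax ? "probably a script client" : "probably a form/browser");
    }
}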
A: Use some JavaScript to set some values.
A: You are not clear whether you want to dinstinguish between legitimate diffent access methods, or against forged attacs (ie. robots or hackers that attempts to look like they are ordinary users).
In the first case keprao have some fine advice for inspecting the headers. In the second case there is basically no way to distinguish between the requests, though robots can be hindered by captchas or authentication.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Vista and ProgramData What is the right place to store program data files which are the same for every user but have to be writeable for the program? What would be the equivalent location on MS Windows XP? I have read that C:\ProgramData is not writeable after installation by normal users. Is that true? How can I retrieve that directory programmatically using the Platform SDK?
A: SHGetFolderPath() with CSIDL of CSIDL_COMMON_APPDATA.
Read more at http://msdn.microsoft.com/en-us/library/bb762181(VS.85).aspx
If you need the path in a batch file, you can also use the %ALLUSERSPROFILE% environment variable.
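A minimal sketch of the call (link against shell32.lib):
#include <windows.h>
#include <shlobj.h>
#include <tchar.h>

int main()
{
    TCHAR szPath[MAX_PATH];
    // Resolves to C:\ProgramData on Vista, and to
    // C:\Documents and Settings\All Users\Application Data on XP.
    if (SUCCEEDED(SHGetFolderPath(NULL, CSIDL_COMMON_APPDATA, NULL,
                                  SHGFP_TYPE_CURRENT, szPath)))
    {
        _tprintf(_T("%s\n"), szPath);
    }
    return 0;
}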
A: There is a great summary of the different options here: http://blogs.msdn.com/cjacks/archive/2008/02/05/where-should-i-write-program-data-instead-of-program-files.aspx
Where Should I Write Program Data
Instead of Program Files?
A common application code update is
this: "my application used to write
files to program files. It felt like
as good a place to put it as any
other. It had my application's name on
it already, and because my users were
admins, it worked fine. But now I see
that this may not be as great a place
to stick things as I once thought,
because with UAC even Administrators
run with standard user-like privileges
most of the time. So, where should I
put my files instead?"
A: Actually SHGetFolderPath is deprecated.
SHGetKnownFolderPath should be used instead.
A: You can use:
CString strPath;
::SHGetSpecialFolderPath(NULL, strPath.GetBuffer(1024), CSIDL_COMMON_APPDATA, FALSE);
strPath.ReleaseBuffer(); // release the buffer so the CString's length is updated
A: See Raymond Chen's article on this specific question.
In short you're asking for a security hole.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: How can I implement rate limiting with Apache? (requests per second) What techniques and/or modules are available to implement robust rate limiting (requests|bytes/ip/unit time) in apache?
A: In Apache 2.4, there's a new stock module called mod_ratelimit. For emulating modem speeds, you can use mod_dialup. Though I don't see why you just couldn't use mod_ratelimit for everything.
A: Sadly, mod_evasive won't work as expected when used in non-prefork configurations (recent Apache setups mostly run threaded MPMs such as worker or event)
A: The best
*
*mod_evasive (Focused more on reducing DoS exposure)
*mod_cband (Best featured for 'normal' bandwidth control)
and the rest
*
*mod_limitipconn
*mod_bw
*mod_bwshare
A: Depends on why you want to rate limit.
If it's to protect against overloading the server, it actually makes sense to put NGINX in front of it, and configure rate limiting there. It makes sense because NGINX uses much less resources, something like a few MB per ten thousand connections. So, if the server is flooded, NGINX will do the rate limiting(using an insignificant amount of resources) and only pass the allowed traffic to Apache.
If all you're after is simplicity, then use something like mod_evasive.
As usual, if it's to protect against DDoS or DoS attacks, use a service like Cloudflare which also has rate limiting.
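For illustration, a minimal NGINX sketch of that front-end limit (the zone name, rate and burst values are illustrative):
# in the http {} block: track clients by IP, roughly 10 requests/second each
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;
    location / {
        limit_req zone=perip burst=20;
        proxy_pass http://127.0.0.1:8080;  # Apache listening behind NGINX
    }
}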
A: As stated in this blog post it seems possible to use mod_security to implement a rate limit per second.
The configuration is something like this:
SecRuleEngine On
<LocationMatch "^/somepath">
SecAction initcol:ip=%{REMOTE_ADDR},pass,nolog
SecAction "phase:5,deprecatevar:ip.somepathcounter=1/1,pass,nolog"
SecRule IP:SOMEPATHCOUNTER "@gt 60" "phase:2,pause:300,deny,status:509,setenv:RATELIMITED,skip:1,nolog"
SecAction "phase:2,pass,setvar:ip.somepathcounter=+1,nolog"
Header always set Retry-After "10" env=RATELIMITED
</LocationMatch>
ErrorDocument 509 "Rate Limit Exceeded"
A: One more option - mod_qos
Not simple to configure - but powerful.
http://opensource.adnovum.ch/mod_qos/
A: There are numerous ways, including web application firewalls, but the easiest thing to implement is an Apache mod.
One such mod I like to recommend is mod_qos. It's a free module that is very effective against certain DoS, brute-force and Slowloris-type attacks. It will ease up your server load quite a bit.
It is very powerful.
The current release of the mod_qos module implements control mechanisms to manage:
*
*The maximum number of concurrent requests to a location/resource
(URL) or virtual host.
*Limitation of the bandwidth such as the
maximum allowed number of requests per second to an URL or the maximum/minimum of downloaded kbytes per second.
*Limits the number of request events per second (special request
conditions).
*Limits the number of request events within a defined period of time.
*It can also detect very important persons (VIP) which may access the
web server without or with fewer restrictions.
*Generic request line and header filter to deny unauthorized
operations.
*Request body data limitation and filtering (requires mod_parp).
*Limits the number of request events for individual clients (IP).
*Limitations on the TCP connection level, e.g., the maximum number of
allowed connections from a single IP source address or dynamic
keep-alive control.
*Prefers known IP addresses when server runs out of free TCP
connections.
This is a sample config of what you can use it for. There are hundreds of possible configurations to suit your needs. Visit the site for more info on controls.
Sample configuration:
# minimum request rate (bytes/sec at request reading):
QS_SrvRequestRate 120
# limits the connections for this virtual host:
QS_SrvMaxConn 800
# allows keep-alive support till the server reaches 600 connections:
QS_SrvMaxConnClose 600
# allows max 50 connections from a single ip address:
QS_SrvMaxConnPerIP 50
# disables connection restrictions for certain clients:
QS_SrvMaxConnExcludeIP 172.18.3.32
QS_SrvMaxConnExcludeIP 192.168.10.
http://opensource.adnovum.ch/mod_qos/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "94"
}
|
Q: How can I programmatically manipulate Windows desktop icon locations? Several years back, I innocently tried to write a little app to save my tactically placed desktop icons because I was sick of dragging them back to their locations when some event reset them. I gave up after burning WAY too much time, having failed to find a way to query, much less save and reset, my icons' desktop position.
Anyone know where Windows persists this info and if there's an API to set them?
Thanks,
Richard
A: The desktop is just a ListView control and you can get its handle and send messages to it to move icons around using LVM_SETITEMPOSITION.
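For illustration, a minimal sketch of the sending side (the Progman / SHELLDLL_DefView / SysListView32 window chain assumed here is undocumented and may differ between Windows versions):
#include <windows.h>
#include <commctrl.h>

int main()
{
    HWND progman  = FindWindow(TEXT("Progman"), NULL);
    HWND defView  = FindWindowEx(progman, NULL, TEXT("SHELLDLL_DefView"), NULL);
    HWND listView = FindWindowEx(defView, NULL, TEXT("SysListView32"), NULL);
    // Move the icon at index 0 to client coordinates (100, 100);
    // x and y are packed directly into the LPARAM, so no pointer is needed.
    SendMessage(listView, LVM_SETITEMPOSITION, 0, MAKELPARAM(100, 100));
    return 0;
}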
Getting icon positions using LVM_GETITEMPOSITION is a bit more complicated, though. You have to pass a pointer to a POINT structure as your LPARAM. If you try to do that, you will likely crash Explorer. The problem is you passed it a pointer in your address space, which the control interpreted as a pointer in Explorer's address space. Ouch!
The solution I've used is to inject a DLL into the Explorer process and send the message from there. Then you just have to have a way to get the position info back to your process.
A: If I'm not mistaken the desktop is just a ListView, and you'll have to send the LVM_SETITEMPOSITION message to the handle of the desktop.
I googled a bit for some C# code and couldn't find an example, but I did find the following article: Torry: ...get/set the positions of desktop icons?. It's Delphi code, but I find it very readable, and with some P/Invokes you'll be able to translate it to C#.
A: I am still looking into this and will post the result once I finally get something working. I'm posting this because, thanks indirectly to Davy's post, I also found a classic VB implementation:
Shuffle Desktop Icons Using Interprocess Memory Communication
and that will probably be the basis for my code.
A: I have no idea about the API, but I know Ultramon (http://www.realtimesoft.com/ultramon/) has a feature included for preserving icon placement (although I've never used it for preserving icon location, it is indispensable for multiple monitor usage). The latest beta release works flawlessly with Vista (except for sometimes having a minor glitch or two when initially logging into my machine via RDP), and of course, haven't had any issues with XP. I've used it for over four years now.
And did I mention that it's the best utility for multiple monitor usage?
A: Maybe you want this one? I found it in 《WindowsCoreProgramming 5th》: https://github.com/wang1902568721/WindowsCoreProgramming
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
}
|
Q: Is it possible to make an eclipse p2 provisioning mechanism running *locally*? Eclipse 3.4[.x] - also known as Ganymede - comes with this new mechanism of provisioning called p2.
"Provisioning" is the process allowing to discover and update on demand some parts of an application, as explained in general in this article on the Sun Web site.
Eclipse has an extended wiki section in which p2 details are presented.
Specifically, it says in this wiki page that p2 will look for new components; however, after reading it, the details are still not entirely clear to me.
I suppose (but you may confirm that point from your own experience) that p2 can work with the "file://" protocol, which would allow it to provision from local files (either on your computer or on a UNC path '\\server\path'), as illustrated here, but also by the files:
*
*[eclipse-SDK-3.4-win32]\eclipse\configuration\.settings\org.eclipse.equinox.p2.artifact.repository.prefs
*[eclipse-SDK-3.4-win32]\eclipse\configuration\.settings\org.eclipse.equinox.p2.metadata.repository.prefs
p2 mechanism is used to update eclipse itself, through an eclipse 3.4 update site, and reference in those '.prefs' files with line like:
repositories/file:_C:_jv_eclipse_eclipse-SDK-3.4-win32_eclipse/url=file:/C:/jv/eclipse/eclipse-SDK-3.4-win32/eclipse/
Now, how could I replicate the eclipse components present in that update site into a local directory and reference those components through the mentioned '.prefs' files, in order to have an upgrade process entirely run locally, without having to access the web?
I suppose that some p2 metadata files present in the remote 'update site' need to be replicated and changed as well.
Do you have any thoughts/advice/tips on that ? (i.e. on how to discover and retrieve and update the complete structure needed for a full eclipse installation, in order to run that installation locally)
A: Yes, you can specify the repository locations if you use the p2.director
This, for example, is a snippet of a script that I use to install Eclipse (Ganymede) from a local copy of the Ganymede repository:
./eclipse\
-nosplash -consolelog -debug\
-vm "${VM}"\
-application org.eclipse.equinox.p2.director.app.application\
-metadataRepository file:${SHARED_REPOSITORY_DIR}\
-artifactRepository file:${SHARED_REPOSITORY_DIR}\
-installIU "${4-org.eclipse.sdk.ide}"\
-destination "${3}"\
-profile "${1}"\
-profileProperties org.eclipse.update.install.features=true\
-bundlepool ${SHARED_BUNDLEPOOL_DIR}\
-p2.os linux\
-p2.ws gtk\
-p2.arch "${2}"\
\
-vmargs\
-Xms64m -Xmx1024m -XX:MaxPermSize=256m\
-Declipse.p2.data.area=${SHARED_P2_DIR}
Here are some links to use the p2 director
http://eclipse.dzone.com/articles/understanding-eclipse-p2-provi
http://wiki.eclipse.org/Equinox_p2_director_application
A: It seems like you need to have one update run via the web, which will mirror (download) what you need; after that it should be able to get the files from the local copy. But I guess that is your question - does it need web access to determine that...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
}
|
Q: HttpHeaders in ASP.NET 1.1 How do I read the Response Headers that are being sent down in a page ? I want to be able to read the header values and modify some of them. This is for an ASP.NET 1.1 application but I would be curious to know if it can done in any version of ASP.NET.
The reason for doing this is someone may have added custom headers of their own before the point I am examining the response - so I cannot blindly clear all the headers and append my own - I need to read the all the headers so I can modify the appropriate ones only.
A: HttpContext.Current.Response (it's an HttpResponse) exposes ClearHeaders(), AddHeader() and AppendHeader().
Not as direct as it is in later versions of ASP.NET, but it should be enough to let you modify the headers you wanted to modify.
http://msdn.microsoft.com/en-us/library/system.web.httpresponse_members(VS.71).aspx
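A minimal sketch of what that looks like (the header name is a made-up example; note that in 1.1 you can add and clear outgoing headers, but not enumerate them):
HttpResponse response = HttpContext.Current.Response;
// Append a custom outgoing header.
response.AppendHeader("X-Custom-Header", "some value");
// ClearHeaders() would wipe everything already set, including custom ones:
// response.ClearHeaders();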
A: AFAIK it cannot be done in ASP.NET 1.1. There is no way for you to get at the response headers - request headers are available but not response headers.
I am not sure if you can do this in other stacks like Java, LAMP though and I am curious to find out...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Get CSIDL_LOCAL_APPDATA path for any user on Windows Is there any Win32/MFC API to get the CSIDL_LOCAL_APPDATA for any user that I want (not only the currently logged on one)? Let's say I have a list of users in the form "domain\user" and I want to get a list of their paths - is that possible?
A: You can get the SID for the user and then look it up under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList and get the ProfileImagePath value.
Once you have this path, you can get CSIDL_LOCAL_APPDATA for your own user, convert that absolute path into a path relative to your profile, and then append that relative path to the other user's profile path.
However, keep in mind that this is relying on an undocumented registry key and can break in future versions of the OS. (Or, as Raymond Chen would say: "Now that you know how to do it, let me tell you why you shouldn't do it this way..." :-))
If you have a token representing the user, you can use the SHGetFolderPath or SHGetKnownFolderPath (on Vista and up). However, there are certain security restrictions and you should read up on MSDN for details.
SHGetFolderPath - http://msdn.microsoft.com/en-us/library/bb762181(VS.85).aspx
SHGetKnownFolderPath - http://msdn.microsoft.com/en-us/library/bb762188(VS.85).aspx
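For the token-based route, a minimal C++ sketch (assumes you have already obtained hToken for the target user, e.g. via LogonUser, that the user's profile is loaded, and that you link against shell32.lib):
#include <windows.h>
#include <shlobj.h>
// Resolve CSIDL_LOCAL_APPDATA for the user behind hToken rather than
// for the calling thread's user.
HRESULT GetLocalAppDataFor(HANDLE hToken, wchar_t path[MAX_PATH])
{
    return SHGetFolderPathW(NULL, CSIDL_LOCAL_APPDATA, hToken,
                            SHGFP_TYPE_CURRENT, path);
}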
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Common CRUD functions in PHP Is there a simple way to write a common function for each of the CRUD (create, retreive, update, delete) operations in PHP WITHOUT using any framework. For example I wish to have a single create function that takes the table name and field names as parameters and inserts data into a mySQL database. Another requirement is that the function should be able to support joins I.e. it should be able to insert data into multiple tables if required.
I know that these tasks could be done by using a framework but because of various reasons - too lengthy to explain here - I cannot use them.
A: If you try to write such function you'll soon discover that you've just realized yet another framework.
A: Of course not, that's why those frameworks exist and implement crud facilities. I'd first try to convince whomever it takes to actually use an existing framework and second, failing the above, I'd take a look at one or two of them and copy the implementation ideas. Failing all that you could take a look at http://www.phpobjectgenerator.com/
A: Without any frameworks includes without any ORMs? Otherwise I would suggest to have a look at Doctrine or Propel.
A: I know the way you feel.
Pork.DbObject is a simple class that you can extend your objects from. It just needs a db connection class to work.
please check out:
www.schizofreend.nl/pork.dbobject/
(oh yeah, yuk @ php object generator. bloat alert! who wants to have those custom functions in every class???)
A: I came across this question on SO a while back and I ended up not finding anything at that time that did this in a light-weight fashion.
I ended up writing my own and I recently got around to open sourcing it (MIT license) in case others may find it useful. It's up on Github, feel free to check it out and use it if it fits your needs!
https://github.com/ArthurD/php-crud-model-class
Hopefully it will find some use - would love to see some improvements / contributions, too so feel free to submit pull requests! :-)
A: I wrote this very thing, it's kind of a polished scaffold. It's basically a class the constructor of which takes the table to be used, an array containing field names and types, and an action. Based on this action the object calls a method on itself. For example:
This is the array I pass:
$data = array(array('name' => 'id', 'type' => 'hidden')
, array('name' => 'student', 'type' => 'text', 'title' => 'Student'));
Then I call the constructor:
new MyScaffold($table, 'edit', $data, $_GET['id']);
In the above case the constructor calls the 'edit' method which presents a form displaying data from the $table, but only fields I set up in my array. The record it uses is determined by the $_GET method. In this example the 'student' field is presented as a text-box (hence the 'text' type). The 'title' is simply the label used. Being 'hidden' the ID field is not shown for editing but is available to the program for use.
If I had passed 'delete' instead of 'edit' it would delete the record from the GET variable. If I passed only a table name it would default to a list of records with buttons for edit, delete, and new.
It's just one class that contains all the CRUD with lots of customisability. You can make it as complicated or as simple as you wish. By making it a generic class I can drop it in to any project and just pass instructions, table information and configuration information. I might for one table not want to permit new records from being added through the scaffold, in this case I might set "newbutton" to be false in my parameters array.
It's not a framework in the conventional sense. Just a standalone class that handles everything internally. There are some drawbacks to this. The key ones must be that all my tables must have a primary key called 'id', you could get away without this but it would complicate matters. Another being that a large array detailing information about each table to be managed must be prepared, but you need only do this once.
For a tutorial on this idea see here
A: I think you should write your own functions that achieve CRUD unless you are stressed for time. It might be a framework on its own, but you need to learn what the framework does before screaming "framework". It also becomes handy to know these things, because you can easily pick up bugs in the framework and fix them yourself.
A: It is possible, but I wouldn't recommend it.
If there's absolutely NO way to use a framework you could create a base class that all other model objects extend. You can then make the base class generate & execute SQL based on get_class() and get_class_vars().
Is it possible? Yes.
Would I recommend it? nope
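For what it's worth, a minimal sketch of that base-class idea (assumes a mysqli connection $db and that each subclass's public properties map 1:1 to the columns of a table named after the class; joins and type handling are left out):
abstract class BaseModel
{
    public function insert(mysqli $db)
    {
        $table  = strtolower(get_class($this));
        $fields = get_object_vars($this); // column => value
        $cols   = implode(', ', array_keys($fields));
        $vals   = array();
        foreach ($fields as $value) {
            $vals[] = "'" . $db->real_escape_string($value) . "'";
        }
        return $db->query("INSERT INTO $table ($cols) VALUES (" . implode(', ', $vals) . ")");
    }
}
class Car extends BaseModel
{
    public $model;
    public $year;
}
A new Car with $model and $year set can then be saved with $car->insert($db).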
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How do you unit test web page authorization using ASP.NET MVC? Let's say you have a profile page that can only be accessed by the owner of that profile. This profile page is located at:
User/Profile/{userID}
Now, I imagine in order to prevent access to this page by other users, you could structure your UserController class's Profile function to check the current session's identity:
HttpContext.Current.User.Identity.Name
If the id matches the one in the url, then you proceed. Otherwise you redirect to some sort of error page.
My question is how do you unit test something like this? I'm guessing that you need to use some sort of dependency injection instead of the HttpContext in the controller to do the check on, but I am unclear what the best way to do that is. Any advice would be helpful.
A: You can probably do it by using a fake for the controller context. Check out this article: http://stephenwalther.com/blog/archive/2008/07/01/asp-net-mvc-tip-12-faking-the-controller-context.aspx
A: The link above is a good one. I would also add that instead of programmatically checking the User.Identity.Name value, you should use the Authorize attributes as outlined in the article:
http://weblogs.asp.net/scottgu/archive/2008/07/14/asp-net-mvc-preview-4-release-part-1.aspx
A: I ended up going with the "UserNameFilter" shown in Kazi Manzur's blog post. Works like a charm and easy to unit test.
A: This is where mocking comes in, with a fake HttpContext.
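A minimal sketch of such a fake using Moq (UserController and the "alice" id are placeholders; the chain works because ControllerContext.HttpContext, User and Identity are virtual or interface members):
var context = new Mock<ControllerContext>();
context.Setup(c => c.HttpContext.User.Identity.Name).Returns("alice");
context.Setup(c => c.HttpContext.Request.IsAuthenticated).Returns(true);
var controller = new UserController();
controller.ControllerContext = context.Object;
// The action now sees "alice" as the logged-in user.
var result = controller.Profile("alice");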
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How do I change the maximum Upload File Size for the Document Manager in a Telerik RAD Controls RADEditor/WYSIWYG control? I'm using the Telerik RAD Controls RADEditor/WYSIWYG control as part of a Dynamic Data solution.
I would like to be able to upload files using the Document Manager of this control.
However, these files are larger than whatever the default setting is for maximum upload file size.
Can anyone point me in the right direction to fix this?
Thanks Yaakov Ellis, see your answer + the answer I linked through a comment for the solution.
A: This thread in combination with Yaakov Ellis's answer may help others.
However, I've found for my problem that putting the following code in the code-behind for the user control FieldTemplate (Dynamic Data), in combination with Yaakov Ellis's answer, solved things.
RadEditor1.DocumentManager.MaxUploadFileSize = 4194304;
A: The Telerik website has instructions here.
Short version: in Web.config set the maxRequestLength
<system.web>
<httpRuntime maxRequestLength="102400" executionTimeout= "3600" />
</system.web>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is there a way to embed a web control in a Windows application that renders using anything but Trident (IE)? I have heard mention that some desktop applications are pretty much just wrappers for websites now.
I have even had the occasional problem that has been best solved this way in the past and I can see it being really useful for current applications development.
However, one problem I always seemed to run into was that the Web Site display controls in Visual Studio use the Trident (Internet Explorer) rendering engine. This tended to be IE6 rendering, but I'm not sure if newer machines in turn use IE7 rendering.
Is there any easy way to use say Gecko (Firefox) or even Webkit (Safari/Chrome) for rendering?
Ultimately I would like to be able to easily plug in this dependency and in turn have a (mostly) compliant framework to develop with and to in turn have consistent rendering for all users.
Please mention if there are any licensing concerns.
Also feel free to ridicule me as I know what I'm asking kind of seems like asking for a "just plug in" internet browser :)
A: I've personal experience with both Trident and Gecko. TomTom HOME 1.x hosts Trident, as an ActiveX control. There have been projects to adapt the COM interfaces to Gecko, but they seemed rather far-fetched. We've tried embedding Gecko, and that wasn't too hard. In the end, we reversed our approach though. TomTom HOME 2.x is a XulRunner application. One problem, don't get tempted to write anything but your UI in Javascript. With XulRunner, you can do most things JS can in C++ too, using XPCOM.
Licensing under MPL is no big deal; your private code is just a "plugin" not subject to the MPL.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: When creating a collection via WebDAV should the name of the collection end with a slash A WebDAV library I'm using is issuing this request
MKCOL /collection HTTP/1.1
To which apache is issuing a 301 because /collection exists
HTTP/1.1 301
Location: /collection/
Rather than a
HTTP/1.1 405 Method Not Allowed
The spec is a bit vague on this (or it could be my reading of it), but when issuing an MKCOL, should the name of your collection always end with a slash (as it is a collection) ?
A: HTTP Code 301 means "Moved Permanently" as you know.
Apache is graciously redirecting you to the proper URL. It can't give you a 405 because no resource exists with the URL you provided. But it can't create the resource with that exact URL either. What it can do is create the resource with the proper URL then redirect you.
But to answer your question, you should end collections with "/" to remove ambiguity, otherwise the resulting URI normalization behavior is up to the server I believe. I don't believe adding that trailing slash is mandated by any RFC.
EDIT:
The MKCOL may succeed without the trailing slash, but notice that the resource reported created has a trailing slash.
The server has an option, according to the RFC, since it determines the URL normalization procedure as long as it doesn't violate the spec.
The server then can either try to normalize every URL you send its way on every operation, returning lots of 3xx codes. This gets expensive. Or it can correct you in the beginning (POST, MKCOL, etc.) and then fail or redirect after that.
But the key point is that it will always let you know the URL it prefers.
Something on HTTP URL Scheme from RFC 2616
3.2.3 URI Comparison
When comparing two URIs to decide if they match or not, a client SHOULD use a case-sensitive octet-by-octet comparison of the entire URIs, with these exceptions:
- A port that is empty or not given is equivalent to the default port for that URI-reference;
- Comparisons of host names MUST be case-insensitive;
- Comparisons of scheme names MUST be case-insensitive;
- An empty abs_path is equivalent to an abs_path of "/".
Characters other than those in the "reserved" and "unsafe" sets (see RFC 2396 [42]) are equivalent to their "%" HEX HEX encoding.
For example, the following three URIs are equivalent:
http://abc.com:80/~smith/home.html
http://ABC.com/%7Esmith/home.html
http://ABC.com:/%7esmith/home.html
Notice there is no mention of how abs_path is defined. Also, strictly speaking, the server can't ignore your slash either, according to the spec. So issuing a "MKCOL /collection" and getting a regular 2xx Created with no new "/collection/" URL is incorrect.
AFAIK, related RFCs that define abs_path do not specify the trailing slash. So it's up to the server on how it compares and normalizes those.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: what's the best implemention of client creatable and modifiable web forms in a relational database? In several web application projects I've been a part of, the client asks to be able to create their own forms. The question arises on how to store their form definitions, and then how to store user inputted values into those custom forms.
I've seen it done two ways:
*
*Assuming that the client only defines how many fields, and what labels are associated with those fields; we can come to a solution involving four tables. FormDefinition, FormFieldDefinition, FormInstances, FormFieldValues. The client makes changes to FormDefinition and FormFieldDefinition, and the web app uses that information to render an HTML web form, on which the website visitor (end user) will submit the form, in which a new row in FormInstances is created and the values are saved in the FormFieldValues table.
Rows in FormDefinition defines the form, i.e. form definition ID = 2, form title = 'Car Registration Form'. Rows in FormFieldDefinition defines fields of a form in FormDefinition, i.e. field definition ID = 7, field label = 'Car Model', field type = 'varchar(50)'. Rows in FormInstance is an instance of each form filled out by a user, i.e. definition id = 2, date_entered = '2008-09-24'. And rows in FormFieldValues are entries by the user, i.e. field definition = 7, value = 'Tiburon'.
Unfortunately, it means the value column in FormFieldValues must be a char type of the largest possible size that your client might specify in a web form... and when form definitions change, managing old data becomes iffy. But user entries are queryable (I wrote a quick query that lists user entries given a form id, which is similar to another pivot question). A minimal DDL sketch of this layout appears right after this list.
*An alternative to using four tables would be to serialize the form definitions and user's form entries into XML (or YAML or something similar) and store that as text. The upside is that the forms are human readable in the database. The downside is that there will be more application overhead with parsing XML, and the database becomes much less queryable from an SQL standpoint.
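To make option 1 concrete, here is a minimal DDL sketch of the four tables; the names follow the description above and the column types are illustrative, not prescriptive:
CREATE TABLE FormDefinition (
    form_def_id INT PRIMARY KEY,
    title       VARCHAR(100)
);
CREATE TABLE FormFieldDefinition (
    field_def_id INT PRIMARY KEY,
    form_def_id  INT REFERENCES FormDefinition(form_def_id),
    label        VARCHAR(100),
    field_type   VARCHAR(30)  -- e.g. 'varchar(50)'
);
CREATE TABLE FormInstance (
    instance_id  INT PRIMARY KEY,
    form_def_id  INT REFERENCES FormDefinition(form_def_id),
    date_entered DATE
);
CREATE TABLE FormFieldValue (
    instance_id  INT REFERENCES FormInstance(instance_id),
    field_def_id INT REFERENCES FormFieldDefinition(field_def_id),
    value        VARCHAR(4000)  -- must fit the largest field a client may define
);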
My real question is, what is this database model called? (So I can google this problem.) But I would settle on an answer to: which is the better implementation or are there better (or just as good) implementations out there?
A: What you're describing is often called "Entity-Attribute-Value," and sometimes described as "mixing data and metadata." That is, the names of attributes (fields) are stored as strings (data).
This leads to a bunch of complex problems like making sure each form instance includes the same set of fields, or making sure mandatory fields are filled out (the equivalent of NOT NULL in a conventional table).
You asked how best to do this with a relational database. In a relational database, you should use metadata for metadata. In this case, it means creating new table(s) for each form, and use columns for form fields. So your form definition is simply the table metadata, and one form instance is one row in that table. If you support forms with multi-valued answers (e.g. checkboxes), you need dependent tables too.
This might seem expensive or hard to scale. Probably true. So a relational database might not be the best tool for this job. You mentioned the possibility of XML or YAML; basically some kind of structured file format that you can define ad hoc. You should define a DTD for each client's form, so each form collected can be validated.
edit: If you really need the flexibility of EAV in your application, that's fine, there are usually circumstances that justify breaking the rules. Just be aware of the extra work it takes, and plan for it in your development schedule and as you scale your server to handle the load. Also see another answer of mine about EAV on StackOverflow.
A:
what is this database model called?
Not sure what to actually call it, but it's the same process used in dynamically generating surveys. If you look for any source for generating survey applications you'll find the same general database schema as well as similar methods.
Also look into form builder sites, like http://wufoo.com/ or http://frevvo.com/ for UI ideas (if you're making it web based).
A: There is a third option, where you create tables and add columns as needed. It depends on how many forms are created, but databases can easily handle a lot of tables. So if a user wants to add a form 'Car Registration Form', you add a table 'CarRegistrationForm'. For every field they want on the form, you can let them choose between some basic types such as date, int and text. And when text is chosen, they have to enter a maximum length from a pick list, which tells you whether the field should be a varchar or a clob.
This works on SQL Server, where you can easily add and drop columns. For DB2 it can be a problem, because drop column is not implemented. For MySQL I'm not sure.
You still need to register your forms and the fields on it in two separate tables.
A: You probably want Entity-Relationship Model
There are indeed GUI tools to create schema interactively from such models.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How can I identify references to Java classes using Perl? I'm writing a Perl script and I've come to a point where I need to parse a Java source file line by line checking for references to a fully qualified Java class name. I know the class I'm looking for up front; also the fully qualified name of the source file that is being searched (based on its path).
For example find all valid references to foo.bar.Baz inside the com/bob/is/YourUncle.java file.
At this moment the cases I can think of that it needs to account for are:
*
*The file being parsed is in the same package as the search class.
find foo.bar.Baz references in foo/bar/Boing.java
*It should ignore comments.
// this is a comment saying this method returns a foo.bar.Baz or Baz instance
// it shouldn't count
/* a multiline comment as well
this shouldn't count
if I put foo.bar.Baz or Baz in here either */
*In-line fully qualified references.
foo.bar.Baz fb = new foo.bar.Baz();
*References based off an import statement.
import foo.bar.Baz;
...
Baz b = new Baz();
What would be the most efficient way to do this in Perl 5.8? Some fancy regex perhaps?
open F, $File::Find::name or die;
# these three things are already known
# $classToFind looking for references of this class
# $pkgToFind the package of the class you're finding references of
# $currentPkg package name of the file being parsed
while(<F>){
# ... do work here
}
close F;
# the results are availble here in some form
A: A Regex is probably the best solution for this, although I did find the following module in CPAN that you might be able to use
*
*Java::JVM::Classfile - Parses compiled class files and returns info about them. You would have to compile the files before you could use this.
Also, remember that it can be tricky to catch all possible variants of a multi-line comment with a regex.
A: You also need to skip quoted strings (you can't even skip comments correctly if you don't also deal with quoted strings).
I'd probably write a fairly simple, efficient, and incomplete tokenizer very similar to the one I wrote in node 566467.
Based on that code I'd probably just dig through the non-comment/non-string chunks looking for \bimport\b and \b\Q$toFind\E\b matches. Perhaps similar to:
if( m[
\G
(?:
[^'"/]+
| /(?![/*])
)+
]xgc
) {
my $code = substr( $_, $-[0], $+[0] - $-[0] );
my $imported = 0;
while( $code =~ /\b(import\s+)?\Q$package\E\b/g ) {
if( $1 ) {
... # Found importing of package
while( $code =~ /\b\Q$class\E\b/g ) {
... # Found mention of imported class
}
last;
}
... # Found a package reference
}
} elsif( m[ \G ' (?: [^'\\]+ | \\. )* ' ]xgc
|| m[ \G " (?: [^"\\]+ | \\. )* " ]xgc
) {
# skip quoted strings
} elsif( m[\G//.*]gc ) {
# skip C++ comments
}
A: This is really just a straight grep for Baz (or for /(foo.bar.| )Baz/ if you're concerned about false positives from some.other.Baz), but ignoring comments, isn't it?
If so, I'd knock together a state engine to track whether you're in a multiline comment or not. The regexes needed aren't anything special. Something along the lines of (untested code):
my $in_comment;
my %matches;
my $line_num = 0;
my $full_target = 'foo.bar.Baz';
my $short_target = (split /\./, $full_target)[-1]; # segment after last . (Baz)
while (my $line = <F>) {
$line_num++;
if ($in_comment) {
next unless $line =~ m|\*/|; # ignore line unless it ends the comment
$line =~ s|.*\*/||; # delete everything prior to end of comment
} elsif ($line =~ m|/\*|) {
if ($line =~ m|\*/|) { # catch /* and */ on same line
$line =~ s|/\*.*\*/||;
} else {
$in_comment = 1;
$line =~ s|/\*.*||; # clear from start of comment to end of line
}
}
$line =~ s|//.*||; # remove single-line comments
$matches{$line_num} = $line if $line =~ /$full_target| $short_target/;
}
for my $key (sort keys %matches) {
print $key, ': ', $matches{$key}, "\n";
}
It's not perfect and the in/out of comment state can be messed up by nested multiline comments or if there are multiple multiline comments on the same line, but that's probably good enough for most real-world cases.
To do it without the state engine, you'd need to slurp into a single string, delete the /.../ comments, and split it back into separate lines, and grep those for non-//-comment hits. But you wouldn't be able to include line numbers in the output that way.
A: This is what I came up with; it works for all the different cases I've thrown at it. I'm still a Perl noob and it's probably not the fastest thing in the world, but it should work for what I need. Thanks for all the answers; they helped me look at it in different ways.
my $className = 'Baz';
my $searchPkg = 'foo.bar';
my @potentialRefs, my @confirmedRefs;
my $samePkg = 0;
my $imported = 0;
my $currentPkg = 'com.bob';
$currentPkg =~ s/\//\./g;
if($currentPkg eq $searchPkg){
$samePkg = 1;
}
my $inMultiLineComment = 0;
open F, $_ or die;
my $lineNum = 0;
while(<F>){
$lineNum++;
if($inMultiLineComment){
if(m|^.*?\*/|){
s|^.*?\*/||; #get rid of the closing part of the multiline comment we're in
$inMultiLineComment = 0;
}else{
next;
}
}
if(length($_) > 0){
s|"([^"\\]*(\\.[^"\\]*)*)"||g; #remove strings first since java cannot have multiline string literals
s|/\*.*?\*/||g; #remove any multiline comments that start and end on the same line
s|//.*$||; #remove the // comments from what's left
if (m|/\*.*$|){
$inMultiLineComment = 1 ;#now if you have any occurence of /* then then at least some of the next line is in the multiline comment
s|/\*.*$||g;
}
}else{
next; #no sense continuing to process a blank string
}
if (/^\s*(import )?($searchPkg)?(.*)?\b$className\b/){
if($imported || $samePkg){
push(@confirmedRefs, $lineNum);
}else {
push(@potentialRefs, $lineNum);
}
if($1){
$imported = 1;
} elsif($2){
push(@confirmedRefs, $lineNum);
}
}
}
close F;
if($imported){
push(@confirmedRefs,@potentialRefs);
}
for (@confirmedRefs){
print "$_\n";
}
A: If you are feeling adventurous enough you could have a look at Parse::RecDescent.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: I can't include a version number in this old Delphi project I inherited. How do I fix it? I have an old Delphi codebase I have to maintain, lots of DLLs, some older than others. In some of these DLLs there is no version information in the Project Options dialog. The controls for adding a version are greyed out and I can't even add a version number by manually editing the .DOF file. How can I include a version number in these projects?
A: It seems the resource directive {$R *.RES} is missing (or enclosed in conditional defines) in your .dpr file so that the IDE doesn't find it.
A: You can create and embed resource files in libraries created under Delphi, by using the $R directive.
This link has information relevant to constructing the RES file.
Delphi has its own resource compiler: BRCC32
A: Check if the default .RES file exists in the project source location. Delphi includes the version number of the project in a .res file with the same name as the .dpr file. If the .RES file does not exist, the simplest way to recreate it is to add the {$R *.RES} compiler directive to the .DPR file, immediately after the uses clause.
library foolib;
uses
foo in 'foo.pas',
baz in 'baz.pas';
{$R *.RES}
exports
foofunc name 'foofunc';
end;
As soon as you add the {$R *.RES} compiler directive Delphi will tell you it has recreated the foolib.res resource file.
A: I use a build control system (FinalBuilder) and that is able to add version resources to all my DLLs and EXEs that are all coherent. Therefore I can be confident that the file set is all labelled with the same build. There are some Delphi projects that don't have versions by default, and FB will add them for you anyway.
A: Inclusion of version info in dll's is a bit erratic. If you specify a lib_suffix the version info is not updated.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Simple serial AVR programmer for beginner What is the cheap and good way to make a serial (RS232) programmer for AVR atMega and atTiny? There are several circuits in the Internet, but which one is better?
I'd like to be able to program my controller from Linux and Windows using some standard tools.
A: Try the Serial port AVR programmer (DASA) Kit from Adafruit Industries. It's only $7.50, is very popular with the Arduino community, and she provides step by step instructions for assembly on her personal site.
If you don't want to build it yourself, Sparkfun Electronics has several serial programmers available for a bit more money.
A: I've previously had good success using the (free!) Pony Programmer software and dongle.
They provide schematics for the hardware, which was simple and seemed to do the trick.
Haven't used the linux version of the software for some time but the windows version seemed to do everything that it needed to.
A: If USB can be used, I really don't think the original programmer (AVR ISP mkII) is that expensive; the price today was about $34.
A: http://onlinetps.com/shop/index.php?main_page=product_info&cPath=65&products_id=188
This Serial Port AVR Programmer is a serial port dongle compatible with PonyProg and other programming software. It does not require any external power supply; it takes power from your target board. The dongle attaches to your PC via a standard DB9 serial port
Information:
This Serial Port AVR Programmer is the most inexpensive AVR programmer on the market. It works with the great free AVR programming software, PonyProg; you can always look at the list of the supported devices on this link, as it grows every month.
Supported Device
ATmega103, ATmega161, ATmega163, ATmega 323, ATmega128, ATmega8, ATmega16, ATmega8L, ATmega16L, ATmega64, ATmega32,ATmega64L, ATmega32L, ATmega162, ATmega169, ATmega8515, ATmega8535, ATtiny12, ATtiny15, ATtiny26, ATtiny2313,ATtiny13, 25, 45, 85, 261, 461
And lots more..........
A: You can use bitbang programming. All you need is an FTDI FT232 chip and the avrdude software.
http://easyelectronics.ru/img/readydev/FTDIBBProg/ftbb.JPG
http://www.nongnu.org/avrdude/
A: The Bus Pirate is one more USB option. It is actually a handy tool for a lot of (serial bus related) uses. It may not be as smooth or fast as dedicated programmers, though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: unsigned int vs. size_t I notice that modern C and C++ code seems to use size_t instead of int/unsigned int pretty much everywhere - from parameters for C string functions to the STL. I am curious as to the reason for this and the benefits it brings.
A: The size_t type is the type returned by the sizeof operator. It is an unsigned integer capable of expressing the size in bytes of any memory range supported on the host machine. It is (typically) related to ptrdiff_t in that ptrdiff_t is a signed integer value such that sizeof(ptrdiff_t) and sizeof(size_t) are equal.
When writing C code you should always use size_t whenever dealing with memory ranges.
The int type on the other hand is basically defined as the size of the (signed) integer value that the host machine can use to most efficiently perform integer arithmetic. For example, on many older PC-type computers the value sizeof(size_t) would be 4 (bytes) but sizeof(int) would be 2 (bytes): 16-bit arithmetic was faster than 32-bit arithmetic, though the CPU could handle a (logical) memory space of up to 4 GiB.
Use the int type only when you care about efficiency as its actual precision depends strongly on both compiler options and machine architecture. In particular the C standard specifies the following invariants: sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) placing no other limitations on the actual representation of the precision available to the programmer for each of these primitive types.
Note: This is NOT the same as in Java (which actually specifies the bit precision for each of the types 'char', 'byte', 'short', 'int' and 'long').
A: The size_t type is the unsigned integer type that is the result of the sizeof operator (and the offsetof operator), so it is guaranteed to be big enough to contain the size of the biggest object your system can handle (e.g., a static array of 8Gb).
The size_t type may be bigger than, equal to, or smaller than an unsigned int, and your compiler might make assumptions about it for optimization.
You may find more precise information in the C99 standard, section 7.17, a draft of which is available on the Internet in pdf format, or in the C11 standard, section 7.19, also available as a pdf draft.
A: This excerpt from the glibc manual 0.02 may also be relevant when researching the topic:
There is a potential problem with the size_t type and versions of GCC prior to release 2.4. ANSI C requires that size_t always be an unsigned type. For compatibility with existing systems' header files, GCC defines size_t in `stddef.h' to be whatever type the system's `sys/types.h' defines it to be. Most Unix systems that define size_t in `sys/types.h' define it to be a signed type. Some code in the library depends on size_t being an unsigned type, and will not work correctly if it is signed.
The GNU C library code which expects size_t to be unsigned is correct. The definition of size_t as a signed type is incorrect. We plan that in version 2.4, GCC will always define size_t as an unsigned type, and the `fixincludes' script will massage the system's `sys/types.h' so as not to conflict with this.
In the meantime, we work around this problem by telling GCC explicitly to use an unsigned type for size_t when compiling the GNU C library. `configure' will automatically detect what type GCC uses for size_t and arrange to override it if necessary.
A: If my compiler is set to 32 bit, size_t is nothing other than a typedef for unsigned int. If my compiler is set to 64 bit, size_t is nothing other than a typedef for unsigned long long.
A: Type size_t must be big enough to store the size of any possible object. Unsigned int doesn't have to satisfy that condition.
For example in 64 bit systems int and unsigned int may be 32 bit wide, but size_t must be big enough to store numbers bigger than 4G
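A quick way to see how these compare on your own platform (C99, for the %zu format specifier):
#include <stdio.h>
#include <stddef.h>
int main(void)
{
    printf("sizeof(unsigned int) = %zu\n", sizeof(unsigned int));
    printf("sizeof(size_t)       = %zu\n", sizeof(size_t));
    printf("sizeof(void *)       = %zu\n", sizeof(void *));
    return 0;
}
On a typical 64-bit Linux box this prints 4, 8 and 8: exactly the case where unsigned int is too small to index every byte of addressable memory.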
A: Classic C (the early dialect of C described by Brian Kernighan and Dennis Ritchie in The C Programming Language, Prentice-Hall, 1978) didn't provide size_t. The C standards committee introduced size_t to eliminate a portability problem
Explained in detail at embedded.com (with a very good example)
A: In short, size_t is never negative, and it maximizes performance because it's typedef'd to be the unsigned integer type that's big enough -- but not too big -- to represent the size of the largest possible object on the target platform.
Sizes should never be negative, and indeed size_t is an unsigned type. Also, because size_t is unsigned, you can store numbers that are roughly twice as big as in the corresponding signed type, because we can use the sign bit to represent magnitude, like all the other bits in the unsigned integer. When we gain one more bit, we are multiplying the range of numbers we can represents by a factor of about two.
So, you ask, why not just use an unsigned int? It may not be able to hold big enough numbers. In an implementation where unsigned int is 32 bits, the biggest number it can represent is 4294967295. Some processors, such as the IP16L32, can copy objects larger than 4294967295 bytes.
So, you ask, why not use an unsigned long int? It exacts a performance toll on some platforms. Standard C requires that a long occupy at least 32 bits. An IP16L32 platform implements each 32-bit long as a pair of 16-bit words. Almost all 32-bit operators on these platforms require two instructions, if not more, because they work with the 32 bits in two 16-bit chunks. For example, moving a 32-bit long usually requires two machine instructions -- one to move each 16-bit chunk.
Using size_t avoids this performance toll. According to this fantastic article, "Type size_t is a typedef that's an alias for some unsigned integer type, typically unsigned int or unsigned long, but possibly even unsigned long long. Each Standard C implementation is supposed to choose the unsigned integer that's big enough--but no bigger than needed--to represent the size of the largest possible object on the target platform."
A: size_t is the size of a pointer.
So in 32 bits or the common ILP32 (integer, long, pointer) model size_t is 32 bits.
and in 64 bits or the common LP64 (long, pointer) model size_t is 64 bits (integers are still 32 bits).
There are other models but these are the ones that g++ use (at least by default)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "560"
}
|
Q: SQL Command for copying table What is the SQL command to copy a table from one database to another database?
I am using MySQL and I have two databases x and y. Suppose I have a table in x called a and I need to copy that table to y database.
Sorry if the question is too novice.
Thanks.
A: If the target table doesn't exist....
CREATE TABLE dest_table AS (SELECT * FROM source_table);
If the target table does exist
INSERT INTO dest_table (SELECT * FROM source_table);
Caveat: Only tested in Oracle
A: If your two database are separated, the simplest thing to do would be to create a dump of your table and to load it into the second database. Refer to your database manual to see how a dump can be performed.
Otherwise you can use the following syntax (for MySQL)
INSERT INTO database_b.table (SELECT * FROM database_a.table)
A: Since your scenario involves two different databases, the correct query should be...
INSERT INTO Y..dest_table (SELECT * FROM source_table);
The query assumes you are running it while using the X database.
A: If you just want to copy the contents, you might be looking for select into:
http://www.w3schools.com/Sql/sql_select_into.asp. This will not create an identical copy though, it will just copy every row from one table to another.
A: At the command line
mysqldump somedb sometable -u user -p | mysql otherdb -u user -p
then type both passwords.
This works even if they are on different hosts (just add the -h parameter as usual), which you can't do with insert select.
Be careful not to accidentally pipe into the wrong db or you will end up dropping the sometable table in that db! (The dump will start with 'drop table sometable').
A: insert blah from select suggested by others is good for copying the data under mysql.
If you want to copy the table structure you might want to use the show create table Tablename; statement.
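Putting structure and data together for MySQL specifically (assumes both databases are on the same server and you have rights on both):
CREATE TABLE y.a LIKE x.a;           -- copies the structure, indexes included
INSERT INTO y.a SELECT * FROM x.a;   -- copies the rows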
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Mocking classes that aren't interfaces I've been writing some providers in c# that inherit from the providerbase class. I've found that it's hard to write tests that use the providers as most mocking frameworks will only allow you to mock an interface.
Is there any way to mock a call to a provider that inherits from providerbase?
If not, is there a pattern that I can use to implement mocking of providers?
A: Mocking frameworks should be able to create for you a mock object based on a class, as long as it's got virtual members.
You may also want to take a look at Typemock
A: I know Rhino mocks can mock classes too, most other mocking frameworks should have no problems with this either.
Things to keep in mind: the class can't be sealed. You need to mark the methods you want to mock as virtual, and the class needs a constructor with no arguments; this can be protected, but private won't work. (just tried this out)
Keep in mind that the mocking framework will just create a class that inherits from your class and creates an object of that type. So constructors will get called. This can cause unexpected behaviour in your tests.
A: RhinoMocks or Moq will create test doubles for classes as well as for interfaces. The type has to have virtual methods or be abstract though. The Typemock isolator gets around this.
I'd suggest that the objects you want to mock probably should be abstract (dependency inversion principle).
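A minimal sketch with Moq (the provider class and its member are made up for illustration; the important parts are the virtual keyword and an accessible constructor):
public class MyProvider : System.Configuration.Provider.ProviderBase
{
    public virtual string GetValue(string key) { return key; }
}
var mock = new Mock<MyProvider>();
mock.Setup(p => p.GetValue("foo")).Returns("bar");
// The mock is usable anywhere a MyProvider is expected.
MyProvider provider = mock.Object;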
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/131806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|