Q: IMG SRC tags and JavaScript Is it possible to call a JavaScript function from the IMG SRC tag to get an image url?
Like this:
<IMG SRC="GetImage()" />
<script language="javascript">
function GetImage() {return "imageName/imagePath.jpg"}
</script>
This is using .NET 2.0.
A:
Is it possible to call a JavaScript function from the IMG SRC tag to get an image url?
Do you mean doing something like the following?
<img src="javascript:GetImage()" />
Unfortunately, no - you can't do that. However, you can do the following hack:
function getImageUrl(img) {
var imageSrc = "imageName/imagePath.jpg";
if(img.src != imageSrc) { // don't get stuck in an endless loop
img.src = imageSrc;
}
}
Then, have the following html:
<img src="http://yourdomain.com/images/empty.gif" onload="getImageUrl(this)" />
The onload event will only fire if you have an actual image set to the src attribute - if you don't set that attribute or set it to an empty string or something similar, you will get no love. Set it to a single pixel transparent gif or something similar.
Anyway, this hack works, but depending on what you are really trying to accomplish, this may not be the best solution. I don't know why you would want to do this, but maybe you have a good reason (that you would like to share with us!).
A: Nope. It's not possible, at least not in all browsers. You can do something like this instead:
<img src="blank.png" id="image" alt="just nothing">
<script type="text/javascript">
document.getElementById('image').src = "yourpicture.png";
</script>
Your favourite JavaScript framework will provide nicer ways :)
A: If you're in the mood for hacks, this works as well.
<img src='blah' onerror="this.src='url to image'">
A: You cannot do it inline in the image's src attribute, but you should be able to call it from an inline script block immediately following your image:
<img src="" id="myImage"/>
<script type="text/javascript">
document.getElementById("myImage").src = GetImage();
</script>
A: You could dynamically serve the image by calling an .aspx page in the src.
E.g.:
<img src="provideImage.aspx?someparameter=x" />
On the page side, you'll need to write the image into the response and change the content type to an image type.
The only "problem" is that your images won't be indexed, and you'd better put some caching on that provider page or you'll hammer the server.
A: Are you looking for this.src?
<img src='images/test.jpg' onmouseover="alert(this.src);">
A: Since you're using .NET, you could add the runat="server" attribute and set the src in your codebehind.
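For illustration, a rough sketch of what that could look like (the control ID and image path are hypothetical, not from the question):
// Markup (hypothetical): <img id="productImage" runat="server" alt="product" />
using System;
using System.Web.UI;
public partial class ImagePage : Page
{
    // productImage is the server-side HtmlImage control ASP.NET generates for the tag above.
    protected void Page_Load(object sender, EventArgs e)
    {
        // Server-side logic chooses the path here instead of JavaScript.
        productImage.Src = ResolveUrl("~/imageName/imagePath.jpg");
    }
}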
A: You might be able to do it on the server side. Alternatively, you could attach an onload event to swap the image src out. I guess the question then becomes: why would you have to use JavaScript in the first place?
A: I've had to do something like this before, and IIRC the trick winds up being that you can't change an src attribute of an image that's part of the DOM tree. So your best bet is to write your HTML skeleton without the image and 1)create an onLoad function that generates a new img element with document.createElement, 2) set the src attribute with setAttribute(), and 3) attach it to your DOM tree.
A: Be aware that the image's OnLoad event will be called again and again if you do something like this.
A: How about this?
var imgsBlocks = new Array( '/1.png', '/2.png', '/3.png');
function getImageUrl(elemid) {
var ind = document.getElementById(elemid).selectedIndex;
document.getElementById("get_img").src=imgsBlocks[ind];
}
Doesn't it work?
<img src="'+imgsBlocks[2]+'" id="get_img"/>
A: You can also try this approach:
const myImage = new Image(200, 200);
myImage.src = 'data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAoHCBEPEQ8PDxEPEQ8PEQ8PDw8RDxEPDw8PGBQZGRkUGBgcIS4lHB4rHxgYJzsmKzMxNzc1GiRIRTszPzA0QzEBDAwMEA8QGRISHjEhISE2MTQ0NDQxMTE0MTExMTQ0NDQ0NDExMTE0NDE0MTExMTQ0NDQ0MTQxMTExPzExNDQxMf/AABEIAM4A9QMBIgACEQEDEQH/xAAbAAEAAgMBAQAAAAAAAAAAAAAAAQcCBQYEA//EAEgQAAIBAgIDCwgIBAMJAAAAAAABAgMEERIFBiEHExYxNFR0krKz0jIzNVFhcnOTIkFCYnGRodEVgZTBI1LwFBckJVOio7HC/8QAGQEBAAMBAQAAAAAAAAAAAAAAAAEDBAUC/8QAKxEBAAIABAUDAwUBAAAAAAAAAAECAxExUQQTFDKBEiEzQXGxImGh0fAj/9oADAMBAAIRAxEAPwC5gAAAAAgwqVFFOUmlFJtybwSS42yvNOa+1HKULKMVCLw36azOXtjHiS9rx/ke8PDtiTlV4veK6rGBTj1v0jziXy6fhIet2kecy6lPwmjo8TeFfPquQFNcLtI85n1KfhIet+kecy6lPwkdHibwnn1XMCmeF+kecz6lLwmPDDSPOZ9Wn4R0eJvBz6roBS/DDSPOZ9Sn4SOGGkecz6lPwjpL7x/vBz6rpBSz1w0jzmfVp+EjhhpLnMurT8I6S+8HPquoFK8MdJc5n1KXhJ4YaR51PqU/COkv+xz67SukFLcMNI85n1KfhHDDSPOZ9Sn4R0l/2OfXaV0gpbhhpHnU+pT8I4X6R5zLq0/CR0l94OfVdIKX4X6S5zPqU/CFrfpHnM+pT8I6S/7HPrtK6CSmaWuekIvNv+f7s4U3F/jgkdxqrrjC9kqFaKpXGH0UsclXDjy48T9j/U8X4e9Izeq4tbTk64AFKwAAAAAAAAAAHH7o99KnaxpReDuJ5Ze2EVma/m8EVeiw91LyLP363ZiV6dXhIyw48seNP60MgkM0qWGBDMmYshIQySGBBBLIIkQyDIggAMQglBJAxIGSRkomOJKkBLIJxIJEmVGrKnOE4PLOEozjJbGpJ4pmJDIF96Nut/oUK3FvtOnUw9WaKeH6nrNTqtyCx6NQ7CNscafaZhvjQABCQAAAAAAAHBbqXkWfv1ezErwsPdS8iz9+r2YldnW4T4o8/lixu+QMhkmhUgxZkyGBABBCUMgkggCMQCAxIxIbLD1Q1IjKMbm+jmzJTp27xSS405+v3fzK8TErhxnL3Wk20cZo3Q9zdvChQqTX1zy5aa/Gb2fyxOltdzm7mk6tahS9iUqsl+WC/UtGnTjCKjFKMUsFGKSSXsSMzDbirzp7NEYNfqr+nuZw+3dzb+5RhD/22fX/AHa2/Objq0/2O7JK+fibvXKpsq7WPUinZWtW5jXqTdPJhCUYJPNUjHjXvHEplxboPo25/Gh30CnEbeGva9Zm3v7s+LWKzlDMxZkYs0K15ar8gsejUOwjamq1X5BY9Ft+wjanGtrLfGkAAISAAAAAAAA4LdT83Z/Eq9lFdFi7qfm7P36vZRXZ1+E+KPP5YsbvlDBANGapDDJMWEoIMzFkCCCSCBAYDZA6bULQivLnPUWNG2yVJp8UqmP0I+3am2vZ7S4kczqBYKhYUZYYSr415P62peT/ANuB0xyce/rvO0NuHXKqQCClYkHNaS11sLaThKpKrUWyUaMd8wfqctkf1NTLdLtvs29y1626Uf8A6LIwcSfeIeJxKx9W23QvRtz+NDvoFNxO61k12oX1pVtoUq8KlR03FzUMiyzjJ4tSx4ov6jhkb+GpatJi0Ze7Pi2ibewGTiQ2aFS89WOQWPRbfsI2pq9WeQ2PRbfu4m0OLbWW+ukAAISAAAAAAAA4HdT83Z/Eq9hFeFibqfm7P4lXsIrs6/CfFHn8sWN3yxIZkQaFSCGZEMhIQSYgRgRgZmBACEM8owXHOUYL8W8P7ks9Wh4Z7q1i/tXFBf8AkiebTlGaYXtbUVTp04LYoQjFL1JJL+x9wDiOgHA7penJ0YQs6UnCVaLlWktj3riUU/qxeOPsXtO+Kf3SZ5tIyX+ShRj+eaX9y/h6xbEjNXizlVyijgsFgl6icCQdRjMCUQSgAYDAvTVnkNj0W37uJtDV6tchsei2/YibQ4ttZb66QAAhIAAAAAAADgd1Pzdn8Sr2EV2WLup+bs/iVewiuzrcJ8UeWLG75DFmRjgaZVBDJIZCQhkkEAYtmR2W5paUq1W5VanTqKNOm4qpCM1F5ntWZbCvFv6KzbZ6rX1Tk4py/wBYnv1ff/GWfSbftouj+C2nNLX+npfsTDQ1rFxlC2toyi1KMo0KcZRkuJppbGY54yJjL0/yujAndsAAYmkKZ3Q3/wAyr+yFDsIuY8NfRdtVk51LehObwTnOjCcmlxYtrEtwcTl2zyzeL19UZKDxGJfP8Ds+aWv9PS/YfwSz5pa/09L9jT1kbKeRO6hsTJFra9aMtqWj7idO3t4Ti6OE4UYQksasE8GlitjZVKNOFicyueiu9PTOQSyCWWPC89WuQ2PRbfu4m0NXq1yGx6Lb92jaHFnWW+NIAAQkAAAAAAABwO6n5uz+JV7CK8LE3U/N2fv1eyiuzr8J8UefyxY3fIQSyGaVSCGAyEhDJBAxZ3W5T567+HT7cjhjudyrz138Kn25Gfifit/vqswu+FmAgHJbUgAAAQBIIAHNboXoy596h30CnUXFuh+jbj3rfvoFOnR4P45+7Lj9zIhkkM1KV56tchsei2/do2hq9WuQ2PRbfu0bQ4s6y310gABCQAAAAAAAHA7qXm7T36vZRXhYe6n5u09+r2UV4dfhPijz+WLG75CCSDRKpAZJDISgAMCD16M0tcWcpStqjpymlGTUKc8yTxS+kn6zyMhnmYiYylMezfcNNJc6l8mh4D16H1t0hUuranO4coVK9KE471RWaMppNYqGK2HKnu0Byyz6TQ7cSm+FT0z7R9fo9Re2ce6+QAcluCsNcdZr62vq1ChXcKUY0nGO90pYOUE3tlFvjLPKZ3QPSVx7tDu4mjhqxa+U+/sqxpmK+zDhnpLnT+TQ8A4Z6S5zL5NDwGgB0OVTaGb1W3be/wBZb25pyo167nTnlcob3SjjlkpLbGKfGkalAHqtYr7Q8zMzqkhmRiyRemrfIrLo1v3cTZms1c5FZdGt+7ibM4s6y3xpAACEgAAAAAAAOC3U/N2fv1eyivCw91Lzdn8Sr2EV2dfhPijz+WLG75CCTE0KkkMkghIQwQwIYDMSBke3QHLLPpNv24nhPdoDlln0m37cTxftn7S9RrC+QAcVvCmt0D0lX92h3cS5Smd0D0lce7Q7uJp4T5PCnH7XOgkg6TKkAASQySGBeurnIrLotv3cTZGt1c5FZdFt+7ibI4s6y310gABCQAAAAAAAHB7qXm7T4lTsortFibqXm7T4lTsoro6/CfFHn8sWN3ykxJBpVIIZJDISEAhkCTEMx
IA9+gOWWfSaHbieA9+gOWWfSaHbieL9s/aXqNYXyADit4Uzug+krj3aHdxLmKZ3QfSVz7tDuomnhPk8Kcftc8AQdJkSAAMjFmZgwleurvIrLotv3cTZGs1d5FZdFt+7ibM4s6y310gABCQAAAAAAAHB7qXm7T4lTsIrss7dKs5VLWFWKx3iopT9kJLK3+eUrDE63Bz/AMo8sWNH60kEkM1KkGLMiGQlBizIxZAkwMzAgD36A5ZZ9Jt+3E8LPdoDlln0m37cTxftn7S9RrC+QAcVvCmd0H0lc+7Q7qJcxTO6D6Sufdod1E08J8nj+lOP2ueIAOkypAAQyMWQZ06cpyjCCcpzkoQS+uTeCX5hK8tXeRWXRbfu4mzPLo633mjRo/8ASp06f45Ypf2PUcWdXQjQABAAAADCE1JKUWnFpNNPFNPiaZmAAAHxrUozjKE0pQmnGUWsVKLWDTK105qJXpylOzwq0Xi1TcsKsPu7dkl7ccfxLQILMPFthznV4vSLaqRer96nh/slxs+42Rwfvea3Hy2XfgDT1t9oV9PXdR/B+95pcfLYer99zS4+Wy8BgOtvtH8/2ciN5UfwevuaXHy2Y8Hr7mlx1GXlgMCOtvtByI3lRnB2+5pc/LY4O33NLjqMvMkdZfaP5ORG6i+Dt9zS46jPZoXQN7C6tZzta8YQuKMpycGlGKmm2y5xgeZ4u0xllBGBWPqkAGVeFT67aGuq1/XqUretUhKNFRnCDcXhTing/wAUWwRgWYeJOHOcPN6+qMlF8Hb7mlx1GODt9zW4+Wy9MBgX9ZbaFXIjdRnB6+5pcfLY4O33NLn5bLzGA6u20HIjdSNDVi/m1GNpWXtnFQiv5yZ3WqOpqtJK4uXGdwvIjHFwo4ra8X5UtvH9R2oPF+IveMtHquFWs5oRIBnWgAAAADXav8js+i2/dxNia7V/kdn0W37uJsQAAAAAAAAPLeXkKEYyqNpSlGEcsJzk5y4koxTZ4rTTVOpJwnjCpvtSko5ZuMstScE1LLg/J24cTeDPbe2kLinKnUzZJbJKMnHMuLB4fUeaehqLy7Jpwc5QlGpNSi51d8bTx/zfpsA8tPWSjOUYxjWcZ1Y0Yz3mooPNQ35Sxy7Fl/fiM6WslpUjmhWzxSm2406klGMYxlKTwjsWWcXi9m0+tLQdvFRUYzSjKnJLfJtKUaW9J8f1weV+tH0o6LpU1HB1PowlTg3VnKUaclBNJt7PIj+QHyenKKqSg1VWFOhVUt4q/T3yU4xillxcsY8X7M+dzrBQjFVIzUoKdGFSbU4U4Kc4xeMmsFJKSeV7fwPotC0FhgpppQimqk01knKpF8fGnKW37zXEI6Bt1FwyyyYwnKm5ycJTi01NpvbJ5Vi3xgemrf04qnmzp1VjBb3NywwTbcUsYpYrFvDA8stYLVLM5ywwzeZreRlzZ/J8nBN5uLZxn3ejabVNN1G6Kyxlvk82VxScXLHFppLHHjaPjHQdulJZZtOnKjtqTbVJxcci27Ek3h6sWB6aOkKU4xmppKdSVKKlF05OpFyTjlkk1LGMtj9R4rvTcaU60JRj/hOEVmqZZScsn0sGsFBZ1jLHZg9hs7e3jSUlFYKU51Htb+nOTk3+bZ8Kuj6cpynJ1HKUJwX+JPCEZJKWVY/RbwW1AeCGnc6ouNGco11UUMtSDc5wU24x+px+j5WK8qP1Y4IafTbUqUkqc4wryzxcaeaq6cXF/aWZPF7MMGeuroejPDzkUoqMIwqzhGGEXBSST2SUW1ij5LQdvhFSUpQglDJKTcJwi8YwlHilFPFpP1sD4z0+lGdSNGbhTqqm5OUY4xcacoyS+8qiwTw4ni0Zz0xLJXnGEGrdzzRlWlCpki5LM45HxuOxbcTPg/arOoUlTjUljUhSbpQqfRjHLKMcE44QWz8fWz7vRlPFylnlKUqbbnOU/Jk5Rjt+ypPHAD1UZSlGLlHLJxi5RxTcZNbY4rjwPsAAAAAAAAAAAAH/2Q==';
document.body.appendChild(myImage);
A: No. The Img's SRC attribute is not an event, therefore the inline JS will never fire.
{
"language": "en",
"url": "https://stackoverflow.com/questions/116967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
Q: In SQL Server 2005, can I do a cascade delete without setting the property on my tables? I have a database full of customer data. It's so big that it's really cumbersome to operate on, and I'd rather just slim it down to 10% of the customers, which is plenty for development. I have an awful lot of tables and I don't want to alter them all with "ON DELETE CASCADE", especially because this is a one-time deal.
Can I do a delete operation that cascades through all my tables without setting them up first? If not, what is my best option?
A: Here's a version of the accepted answer optimised for sparsely populated data models. It checks for the existence of data in a FK chain before adding it to the deletion list. I use it to clean up test data.
Don't use it on an active transactional DB; it will hold locks for way too long.
/*
-- ============================================================================
-- Purpose: Performs a cascading hard-delete.
-- Not for use on an active transactional database- it holds locks for too long.
-- (http://stackoverflow.com/questions/116968/in-sql-server-2005-can-i-do-a-cascade-delete-without-setting-the-property-on-my)
-- eg:
exec dbo.hp_Common_Delete 'tblConsumer', 'Surname = ''TestDxOverdueOneReviewWm''', 1
-- ============================================================================
*/
create proc [dbo].[hp_Common_Delete]
(
@TableName sysname,
@Where nvarchar(4000), -- Shouldn't include 'where' keyword, e.g. Surname = 'smith', NOT where Surname = 'smith'
@IsDebug bit = 0
)
as
set nocount on
begin try
-- Prepare tables to store deletion criteria.
-- #tmp_to_delete stores criteria that is tested for results before being added to #to_delete
create table #to_delete
(
id int identity(1, 1) primary key not null,
criteria nvarchar(4000) not null,
table_name sysname not null,
processed bit not null default(0)
)
create table #tmp_to_delete
(
id int primary key identity(1,1),
criteria nvarchar(4000) not null,
table_name sysname not null
)
-- Open a transaction (it'll be a long one- don't use this on production!)
-- We need a transaction around criteria generation because we only
-- retain criteria that has rows in the db, and we don't want that to change under us.
begin tran
-- If the top-level table meets the deletion criteria, add it
declare @Sql nvarchar(4000)
set @Sql = 'if exists(select top(1) * from ' + @TableName + ' where ' + @Where + ')
insert #to_delete (criteria, table_name) values (''' + replace(@Where, '''', '''''') + ''', ''' + @TableName + ''')'
exec (@Sql)
-- Loop over deletion table, walking foreign keys to generate delete targets
declare @id int, @tmp_id int, @criteria nvarchar(4000), @new_criteria nvarchar(4000), @table_name sysname, @new_table_name sysname
while exists(select 1 from #to_delete where processed = 0)
begin
-- Grab table/criteria to work on
select top(1) @id = id,
@criteria = criteria,
@table_name = table_name
from #to_delete
where processed = 0
order by id desc
-- Insert all immediate child tables into a temp table for processing
insert #tmp_to_delete
select referencing_column.name + ' in (select [' + referenced_column.name + '] from [' + @table_name +'] where ' + @criteria + ')',
referencing_table.name
from sys.foreign_key_columns fk
inner join sys.columns referencing_column on fk.parent_object_id = referencing_column.object_id
and fk.parent_column_id = referencing_column.column_id
inner join sys.columns referenced_column on fk.referenced_object_id = referenced_column.object_id
and fk.referenced_column_id = referenced_column.column_id
inner join sys.objects referencing_table on fk.parent_object_id = referencing_table.object_id
inner join sys.objects referenced_table on fk.referenced_object_id = referenced_table.object_id
inner join sys.objects constraint_object on fk.constraint_object_id = constraint_object.object_id
where referenced_table.name = @table_name
and referencing_table.name != referenced_table.name
-- Loop on child table criteria, and insert them into delete table if they have records in the db
select @tmp_id = max(id) from #tmp_to_delete
while (@tmp_id >= 1)
begin
select @new_criteria = criteria, @new_table_name = table_name from #tmp_to_delete where id = @tmp_id
set @Sql = 'if exists(select top(1) * from ' + @new_table_name + ' where ' + @new_criteria + ')
insert #to_delete (criteria, table_name) values (''' + replace(@new_criteria, '''', '''''') + ''', ''' + @new_table_name + ''')'
exec (@Sql)
set @tmp_id = @tmp_id - 1
end
truncate table #tmp_to_delete
-- Move to next record
update #to_delete
set processed = 1
where id = @id
end
-- We have a list of all tables requiring deletion. Actually delete now.
select @id = max(id) from #to_delete
while (@id >= 1)
begin
select @criteria = criteria, @table_name = table_name from #to_delete where id = @id
set @Sql = 'delete from [' + @table_name + '] where ' + @criteria
if (@IsDebug = 1) print @Sql
exec (@Sql)
-- Next record
set @id = @id - 1
end
commit
end try
begin catch
-- Any error results in a rollback of the entire job
if (@@trancount > 0) rollback
declare @message nvarchar(2047), @errorProcedure nvarchar(126), @errorMessage nvarchar(2048), @errorNumber int, @errorSeverity int, @errorState int, @errorLine int
select @errorProcedure = isnull(error_procedure(), N'hp_Common_Delete'),
@errorMessage = isnull(error_message(), N'hp_Common_Delete unable to determine error message'),
@errorNumber = error_number(), @errorSeverity = error_severity(), @errorState = error_state(), @errorLine = error_line()
-- Prepare error information as it would be output in SQL Mgt Studio
declare @event nvarchar(2047)
select @event = 'Msg ' + isnull(cast(@errorNumber as varchar), 'null') +
', Level ' + isnull(cast(@errorSeverity as varchar), 'null') +
', State ' + isnull(cast(@errorState as varchar), 'null') +
', Procedure ' + isnull(@errorProcedure, 'null') +
', Line ' + isnull(cast(@errorLine as varchar), 'null') +
': ' + isnull(@errorMessage, '@ErrorMessage null')
print @event
-- Re-raise error to ensure admin/job runners understand there was a failure
raiserror(@errorMessage, @errorSeverity, @errorState)
end catch
A: Unless you want to maintain all the related delete queries as proposed by Chris, ON DELETE CASCADE is by far the quickest and most direct solution. And if you don't want it to be permanent, you can use some T-SQL that switches the option on and off, like this:
-- remove the original Tbl_A_MyFK constraint (without the ON DELETE CASCADE)
ALTER TABLE Tbl_A DROP CONSTRAINT Tbl_A_MyFK
-- re-create the constraint Tbl_A_MyFK with ON DELETE CASCADE
ALTER TABLE Tbl_A ADD CONSTRAINT Tbl_A_MyFK FOREIGN KEY (MyFK) REFERENCES Tbl_B(Column) ON DELETE CASCADE
-- here you can do your delete
DELETE FROM Tbl_A WHERE ...
-- drop the constraint Tbl_A_MyFK again
ALTER TABLE Tbl_A DROP CONSTRAINT Tbl_A_MyFK
-- re-create the constraint Tbl_A_MyFK without ON DELETE CASCADE
ALTER TABLE Tbl_A ADD CONSTRAINT Tbl_A_MyFK FOREIGN KEY (MyFK) REFERENCES Tbl_B(Column)
A: Combining your advice and a script I found online, I made a procedure that will produce SQL you can run to perform a cascaded delete regardless of ON DELETE CASCADE. It was probably a big waste of time, but I had a good time writing it. An advantage of doing it this way is, you can put a GO statement between each line, and it doesn't have to be one big transaction. The original was a recursive procedure; this one unrolls the recursion into a stack table.
create procedure usp_delete_cascade (
@base_table_name varchar(200), @base_criteria nvarchar(1000)
)
as begin
-- Adapted from http://www.sqlteam.com/article/performing-a-cascade-delete-in-sql-server-7
-- Expects the name of a table, and a conditional for selecting rows
-- within that table that you want deleted.
-- Produces SQL that, when run, deletes all table rows referencing the ones
-- you initially selected, cascading into any number of tables,
-- without the need for "ON DELETE CASCADE".
-- Does not appear to work with self-referencing tables, but it will
-- delete everything beneath them.
-- To make it easy on the server, put a "GO" statement between each line.
declare @to_delete table (
id int identity(1, 1) primary key not null,
criteria nvarchar(1000) not null,
table_name varchar(200) not null,
processed bit not null,
delete_sql varchar(1000)
)
insert into @to_delete (criteria, table_name, processed) values (@base_criteria, @base_table_name, 0)
declare @id int, @criteria nvarchar(1000), @table_name varchar(200)
while exists(select 1 from @to_delete where processed = 0) begin
select top 1 @id = id, @criteria = criteria, @table_name = table_name from @to_delete where processed = 0 order by id desc
insert into @to_delete (criteria, table_name, processed)
select referencing_column.name + ' in (select [' + referenced_column.name + '] from [' + @table_name +'] where ' + @criteria + ')',
referencing_table.name,
0
from sys.foreign_key_columns fk
inner join sys.columns referencing_column on fk.parent_object_id = referencing_column.object_id
and fk.parent_column_id = referencing_column.column_id
inner join sys.columns referenced_column on fk.referenced_object_id = referenced_column.object_id
and fk.referenced_column_id = referenced_column.column_id
inner join sys.objects referencing_table on fk.parent_object_id = referencing_table.object_id
inner join sys.objects referenced_table on fk.referenced_object_id = referenced_table.object_id
inner join sys.objects constraint_object on fk.constraint_object_id = constraint_object.object_id
where referenced_table.name = @table_name
and referencing_table.name != referenced_table.name
update @to_delete set
processed = 1
where id = @id
end
select 'print ''deleting from ' + table_name + '...''; delete from [' + table_name + '] where ' + criteria from @to_delete order by id desc
end
exec usp_delete_cascade 'root_table_name', 'id = 123'
A: Go into SQL Server Management Studio and right-click the database. Select Tasks->Generate Scripts. Click Next twice. On the Options window, set it to generate CREATE statements only and put everything to False except for the Foreign Keys. Click Next. Select Tables and click Next again. Click the "Select All" button, click Next, then Finish, and send the script to your choice of a query window or file (don't use the clipboard, since it might be a big script). Now remove all of the script that adds the tables and you should be left with a script that creates your foreign keys.
Make a copy of that script because it is how you'll restore your database to its current state. Use a search and replace to add the ON DELETE CASCADE to the end of each constraint. This might vary depending on how your FKs are currently set up and you might need to do some manual editing.
Repeat the script generation, but this time set it to generate DROP statements only. Be sure to manually remove the table drops that are generated. Run the drops, then run your edited creates to make them all cascade on delete. Do your deletes, run the drop script again and then run the script that you saved off at the start.
Also - MAKE A BACKUP OF YOUR DB FIRST! Even if it's just a dev database, it will save you some headache if part of the script isn't quite right.
Hope this helps!
BTW - you should definitely do some testing with your full test data as another poster suggested, but I can see why you might not need that for initial development. Just don't forget to include that as part of QA at some point.
A: Kevin's post is incomplete: his T-SQL stored procedure only prints the commands. To actually execute them, add the following before the last end:
DECLARE @commandText VARCHAR(8000)
DECLARE curDeletes CURSOR FOR
select 'delete from [' + table_name + '] where ' + criteria from @to_delete order by id desc
OPEN curDeletes
FETCH NEXT FROM curDeletes
INTO
@commandText
WHILE(@@FETCH_STATUS=0)
BEGIN
EXEC (@commandText)
FETCH NEXT FROM curDeletes INTO @commandText
END
CLOSE curDeletes
DEALLOCATE curDeletes
A: I usually just hand write the queries to delete the records I don't want and save that as a .sql file for future reference. The pseudocode is:
*
*select id's of records from the main table that I want to delete into a temp table
*write a delete query for each related table which joins to the temp table.
*write a delete query for the main table joining to my temp table.
A: My suggestion is to go ahead and write a script that will add the on delete cascade to each relationship in the database while exporting a list of modified relationships. Then you can reverse the process and remove the on delete cascade command on each table in the list.
A: Personally if you are going to leave the records in production, I would also leave them in development. Otherwise you may write code that works fine when the recordset is small but times out when faced with the real recordset.
But if you are determined to do this, I would copy the id field of the records you want to dete from the main table first to a work table. Then I would take each related table and write a delete joining to that worktable to only delete those records. Finish up with the parent table. Make sure this ia written in a script and saved so the next time you want to do a similar thing to your test data, you can easily run it without having to figure out what are the reated tables that need records deleted from them.
A: Taking the accepted answer a bit further, I had the need to do this across tables in different schemas. I have updated the script to include schema in the outputted delete scripts.
CREATE PROCEDURE usp_delete_cascade (
@base_table_schema varchar(100), @base_table_name varchar(200), @base_criteria nvarchar(1000)
)
as begin
-- Expects the name of a table, and a conditional for selecting rows
-- within that table that you want deleted.
-- Produces SQL that, when run, deletes all table rows referencing the ones
-- you initially selected, cascading into any number of tables,
-- without the need for "ON DELETE CASCADE".
-- Does not appear to work with self-referencing tables, but it will
-- delete everything beneath them.
-- To make it easy on the server, put a "GO" statement between each line.
declare @to_delete table (
id int identity(1, 1) primary key not null,
criteria nvarchar(1000) not null,
table_schema varchar(100),
table_name varchar(200) not null,
processed bit not null,
delete_sql varchar(1000)
)
insert into @to_delete (criteria, table_schema, table_name, processed) values (@base_criteria, @base_table_schema, @base_table_name, 0)
declare @id int, @criteria nvarchar(1000), @table_name varchar(200), @table_schema varchar(100)
while exists(select 1 from @to_delete where processed = 0) begin
select top 1 @id = id, @criteria = criteria, @table_name = table_name, @table_schema = table_schema from @to_delete where processed = 0 order by id desc
insert into @to_delete (criteria, table_schema, table_name, processed)
select referencing_column.name + ' in (select [' + referenced_column.name + '] from [' + @table_schema + '].[' + @table_name +'] where ' + @criteria + ')',
schematable.name,
referencing_table.name,
0
from sys.foreign_key_columns fk
inner join sys.columns referencing_column on fk.parent_object_id = referencing_column.object_id
and fk.parent_column_id = referencing_column.column_id
inner join sys.columns referenced_column on fk.referenced_object_id = referenced_column.object_id
and fk.referenced_column_id = referenced_column.column_id
inner join sys.objects referencing_table on fk.parent_object_id = referencing_table.object_id
inner join sys.schemas schematable on referencing_table.schema_id = schematable.schema_id
inner join sys.objects referenced_table on fk.referenced_object_id = referenced_table.object_id
inner join sys.objects constraint_object on fk.constraint_object_id = constraint_object.object_id
where referenced_table.name = @table_name
and referencing_table.name != referenced_table.name
update @to_delete set
processed = 1
where id = @id
end
select 'print ''deleting from ' + table_name + '...''; delete from [' + table_schema + '].[' + table_name + '] where ' + criteria from @to_delete order by id desc
end
exec usp_delete_cascade 'schema', 'RootTable', 'Id = 123'
exec usp_delete_cascade 'schema', 'RootTable', 'GuidId = ''A7202F84-FA57-4355-B499-1F8718E29058'''
A: An expansion of croisharp's answer that takes triggers into consideration, i.e. a schema-aware solution that disables all affected triggers, deletes the rows, and re-enables the triggers.
CREATE PROCEDURE usp_delete_cascade (
@base_table_schema varchar(100),
@base_table_name varchar(200),
@base_criteria nvarchar(1000)
)
as begin
-- Expects the name of a table, and a conditional for selecting rows
-- within that table that you want deleted.
-- Produces SQL that, when run, deletes all table rows referencing the ones
-- you initially selected, cascading into any number of tables,
-- without the need for "ON DELETE CASCADE".
-- Does not appear to work with self-referencing tables, but it will
-- delete everything beneath them.
-- To make it easy on the server, put a "GO" statement between each line.
declare @to_delete table (
id int identity(1, 1) primary key not null,
criteria nvarchar(1000) not null,
table_schema varchar(100),
table_name varchar(200) not null,
processed bit not null,
delete_sql varchar(1000)
)
insert into @to_delete (criteria, table_schema, table_name, processed) values (@base_criteria, @base_table_schema, @base_table_name, 0)
declare @id int, @criteria nvarchar(1000), @table_name varchar(200), @table_schema varchar(100)
while exists(select 1 from @to_delete where processed = 0) begin
select top 1 @id = id, @criteria = criteria, @table_name = table_name, @table_schema = table_schema from @to_delete where processed = 0 order by id desc
insert into @to_delete (criteria, table_schema, table_name, processed)
select referencing_column.name + ' in (select [' + referenced_column.name + '] from [' + @table_schema + '].[' + @table_name +'] where ' + @criteria + ')',
schematable.name,
referencing_table.name,
0
from sys.foreign_key_columns fk
inner join sys.columns referencing_column on fk.parent_object_id = referencing_column.object_id
and fk.parent_column_id = referencing_column.column_id
inner join sys.columns referenced_column on fk.referenced_object_id = referenced_column.object_id
and fk.referenced_column_id = referenced_column.column_id
inner join sys.objects referencing_table on fk.parent_object_id = referencing_table.object_id
inner join sys.schemas schematable on referencing_table.schema_id = schematable.schema_id
inner join sys.objects referenced_table on fk.referenced_object_id = referenced_table.object_id
inner join sys.objects constraint_object on fk.constraint_object_id = constraint_object.object_id
where referenced_table.name = @table_name
and referencing_table.name != referenced_table.name
update @to_delete set
processed = 1
where id = @id
end
select 'print ''deleting from ' + table_name + '...''; delete from [' + table_schema + '].[' + table_name + '] where ' + criteria from @to_delete order by id desc
DECLARE @commandText VARCHAR(8000), @triggerOn VARCHAR(8000), @triggerOff VARCHAR(8000)
DECLARE curDeletes CURSOR FOR
select
'DELETE FROM [' + table_schema + '].[' + table_name + '] WHERE ' + criteria,
'ALTER TABLE [' + table_schema + '].[' + table_name + '] DISABLE TRIGGER ALL',
'ALTER TABLE [' + table_schema + '].[' + table_name + '] ENABLE TRIGGER ALL'
from @to_delete order by id desc
OPEN curDeletes
FETCH NEXT FROM curDeletes INTO @commandText, @triggerOff, @triggerOn
WHILE(@@FETCH_STATUS=0)
BEGIN
EXEC (@triggerOff)
EXEC (@commandText)
EXEC (@triggerOn)
FETCH NEXT FROM curDeletes INTO @commandText, @triggerOff, @triggerOn
END
CLOSE curDeletes
DEALLOCATE curDeletes
end
A: After the final select you still have to build and execute the actual deletes:
declare @deleteSql nvarchar(1200)
declare delete_cursor cursor for
select table_name, criteria
from @to_delete
order by id desc
open delete_cursor
fetch next from delete_cursor
into @table_name, @criteria
while @@fetch_status = 0
begin
select @deleteSql = 'delete from ' + @table_name + ' where ' + @criteria
--print @deleteSql
-- exec sp_execute @deleteSql
EXEC SP_EXECUTESQL @deleteSql
fetch next from delete_cursor
into @table_name, @criteria
end
close delete_cursor
deallocate delete_cursor
A: Here is a script that will work with foreign keys that contain more than one column.
create procedure usp_delete_cascade (
@TableName varchar(200), @Where nvarchar(1000)
) as begin
declare @to_delete table (
id int identity(1, 1) primary key not null,
criteria nvarchar(1000) not null,
table_name varchar(200) not null,
processed bit not null default(0),
delete_sql varchar(1000)
)
DECLARE @MyCursor CURSOR
declare @referencing_column_name varchar(1000)
declare @referencing_table_name varchar(1000)
declare @Sql nvarchar(4000)
insert into @to_delete (criteria, table_name) values ('', @TableName)
declare @id int, @criteria nvarchar(1000), @table_name varchar(200)
while exists(select 1 from @to_delete where processed = 0) begin
select top 1 @id = id, @criteria = criteria, @table_name = table_name from @to_delete where processed = 0 order by id desc
SET @MyCursor = CURSOR FAST_FORWARD
FOR
select referencing_column.name as column_name,
referencing_table.name as table_name
from sys.foreign_key_columns fk
inner join sys.columns referencing_column on fk.parent_object_id = referencing_column.object_id
and fk.parent_column_id = referencing_column.column_id
inner join sys.columns referenced_column on fk.referenced_object_id = referenced_column.object_id
and fk.referenced_column_id = referenced_column.column_id
inner join sys.objects referencing_table on fk.parent_object_id = referencing_table.object_id
inner join sys.objects referenced_table on fk.referenced_object_id = referenced_table.object_id
inner join sys.objects constraint_object on fk.constraint_object_id = constraint_object.object_id
where referenced_table.name = @table_name
and referencing_table.name != referenced_table.name
OPEN @MyCursor
FETCH NEXT FROM @MYCursor
INTO @referencing_column_name, @referencing_table_name
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT @referencing_column_name
PRINT @referencing_table_name
update @to_delete set criteria = criteria + ' AND '+@table_name+'.'+@referencing_column_name+'='+ @referencing_table_name+'.'+@referencing_column_name
where table_name = @referencing_table_name
if(@@ROWCOUNT = 0)
BEGIN
--if(@id <> 1)
--BEGIN
insert into @to_delete (criteria, table_name)
VALUES( ' LEFT JOIN '+@table_name+' ON '+@table_name+'.'+@referencing_column_name+'='+ @referencing_table_name+'.'+@referencing_column_name+ @criteria,
@referencing_table_name
)
--END
--ELSE
--BEGIN
--insert into @to_delete (criteria, table_name)
--VALUES( ' LEFT JOIN '+@table_name+' ON '+@table_name+'.'+@referencing_column_name+'='+ @referencing_table_name+'.'+@referencing_column_name,
--@referencing_table_name
--)
--END
END
FETCH NEXT FROM @MYCursor
INTO @referencing_column_name, @referencing_table_name
END
CLOSE @MyCursor
DEALLOCATE @MyCursor
update @to_delete set
processed = 1
where id = @id
end
--select 'print ''deleting from ' + table_name + '...''; delete from [' + table_name + '] where ' + criteria from @to_delete order by id desc
--select id, table_name, criteria, @Where from @to_delete order by id desc
select @id = max(id) from @to_delete
while (@id >= 1)
begin
select @criteria = criteria, @table_name = table_name from @to_delete where id = @id
set @Sql = 'delete [' + @table_name + '] from [' + @table_name + '] ' + @criteria+' WHERE '+@Where
exec (@Sql)
PRINT @Sql
-- Next record
set @id = @id - 1
end
end
A: This script has two issues:
1. You must pass the condition 1=1 in order to delete the entire base table.
2. It only builds the direct relations to the base table. If the final table also has a parent relation to another table, the delete fails. For example,
DELETE FROM [dbo].[table2] WHERE TableID in (select [ID] from [dbo].[table3] where 1=1)
fails if table2 has a parent relation to table1.
{
"language": "en",
"url": "https://stackoverflow.com/questions/116968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
}
Q: Can anyone recommend a simple Java web-app framework? I'm trying to get started on what I'm hoping will be a relatively quick web application in Java, yet most of the frameworks I've tried (Apache Wicket, Liftweb) require so much set-up, configuration, and trying to wrap my head around Maven while getting the whole thing to play nice with Eclipse, that I spent the whole weekend just trying to get to the point where I write my first line of code!
Can anyone recommend a simple Java webapp framework that doesn't involve Maven, hideously complicated directory structures, or countless XML files that must be manually edited?
A: Check out WaveMaker for building a quick, simple webapp. They have a browser based drag-and-drop designer for Dojo/JavaScript widgets, and the backend is 100% Java.
A: Stripes: pretty good. A book on it has come out from the Pragmatic Programmers: http://www.pragprog.com/titles/fdstr/stripes. No XML. Requires Java 1.5 or later.
Tapestry: I have tried an old version, 3.x. I'm told that the current version, 5.x, is in beta and pretty good.
Stripes should be the better of the two in terms of staying out of Maven's way, avoiding XML, and being quick to wrap your head around.
BR,
~A
A: Grails is written for Groovy, not Java. AppFuse merely reduces the setup time required to get any number of Webapp frameworks started, rather than promoting any one of them.
I'd suggest Spring MVC. After following the well-written tutorials, you'll have a simple, easy model auto-wired (with no XML configuration!) into any view technology you like.
Want to add a "delete" action to your list of customers? Just add a method named "delete" to your customer controller, and it's autowired to the URL /customers/delete.
Need to bind your request parameters onto an object? Just add an instance of the target object to your method, and Spring MVC will use reflection to bind your parameters, making writing your logic as easy as if the client passed a strongly-typed object to begin with.
Sick of all the forced MVC division of labor? Just have your method return void, and write your response directly to the servlet's Writer, if that's your thing.
A: Haven't tried it myself, but I think
http://www.playframework.org/
has a lot of potential...
Coming from PHP and classic ASP, it's the first Java web framework that sounds promising to me...
Edit by original question asker - 2011-06-09
Just wanted to provide an update.
I went with Play and it was exactly what I asked for. It requires very little configuration, and just works out of the box. It is unusual in that it eschews some common Java best-practices in favor of keeping things as simple as possible.
In particular, it makes heavy use of static methods, and even does some introspection on the names of variables passed to methods, something not supported by the Java reflection API.
Play's attitude is that its first goal is being a useful web framework, and sticking to common Java best-practices and idioms is secondary to that. This approach makes sense to me, but Java purists may not like it, and would be better-off with Apache Wicket.
In summary, if you want to build a web-app with convenience and simplicity comparable to a framework like Ruby on Rails, but in Java and with the benefit of Java's tooling (eg. Eclipse), then Play Framework is a great choice.
A: I like Spring MVC, using 2.5 features there is very little XML involved.
A: The Stripes Framework is an excellent framework. The only configuration involved is pasting a few lines in your web.xml.
It's a very straight forward request based Java web framework.
A:
Apache Wicket, Liftweb) require so much set-up, configuration
I disagree; I use Wicket for all my projects and have never looked back!
It doesn't take much to set up: not even an hour to get a full environment working with Wicket.
A: Have a look at Ninja Web Framework.
It is a pure Java MVC framework in the tradition of Rails. It does not use any xml based configuration and has all you need to get started right away: Session management, Security management, html rendering, json rendering and parsing, xml rendering and parsing. It also features a built-in testing environment and is 100% compatible with traditional servlet containers.
It uses Maven, though - but Maven used correctly makes software development super simple. It also allows you to use any IDE right away :)
By the way, developing with Ninja is really productive - make changes to your code and see the results immediately.
Check out: http://www.ninjaframework.org.
A: (Updated for Spring 3.0)
I go with Spring MVC as well.
You need to download Spring from here
To configure your web-app to use Spring add the following servlet to your web.xml
<web-app>
<servlet>
<servlet-name>spring-dispatcher</servlet-name>
<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>spring-dispatcher</servlet-name>
<url-pattern>/*</url-pattern>
</servlet-mapping>
</web-app>
You then need to create your Spring config file /WEB-INF/spring-dispatcher-servlet.xml
Your first version of this file can be as simple as:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:context="http://www.springframework.org/schema/context"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">
<context:component-scan base-package="com.acme.foo" />
<mvc:annotation-driven />
</beans>
Spring will then automatically detect classes annotated with @Controller
A simple controller is then:
package com.acme.foo;
import java.util.logging.Logger;
import org.springframework.stereotype.Controller;
import org.springframework.ui.ModelMap;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
@Controller
@RequestMapping("/person")
public class PersonController {
Logger logger = Logger.getAnonymousLogger();
@RequestMapping(method = RequestMethod.GET)
public String setupForm(ModelMap model) {
model.addAttribute("person", new Person());
return "details.jsp";
}
@RequestMapping(method = RequestMethod.POST)
public String processForm(@ModelAttribute("person") Person person) {
logger.info(person.getId());
logger.info(person.getName());
logger.info(person.getSurname());
return "success.jsp";
}
}
And the details.jsp
<%@ taglib uri="http://www.springframework.org/tags/form" prefix="form"%>
<form:form commandName="person">
<table>
<tr>
<td>Id:</td>
<td><form:input path="id" /></td>
</tr>
<tr>
<td>Name:</td>
<td><form:input path="name" /></td>
</tr>
<tr>
<td>Surname:</td>
<td><form:input path="surname" /></td>
</tr>
<tr>
<td colspan="2"><input type="submit" value="Save Changes" /></td>
</tr>
</table>
</form:form>
This is just the tip of the iceberg with regards to what Spring can do...
Hope this helps.
A: I like writing plain old servlets+winstone servlet container. From there I bolt on templating (velocity, XSLT, etc) and DB access (hibernate, torque, etc) libraries as I need them rather than going in for an actual framework.
A: Try Apache Click
It is like Wicket, but much more productive and easy to learn.
A: I am really grooving to Stripes. Total setup includes some cut-and-paste XML into your app's web.xml, and then you're off. No configuration is required, since Stripes is a convention-over-configuration framework. Overriding the default behavior is accomplished via Java 1.5 annotations. Documentation is great. I spent about 1-2 hours reading the tutorial and setting up my first app.
I can't do an in-depth comparison to Struts or Spring-MVC yet, since I haven't built a full-scale in it yet (as I have in Struts), but it looks like it would scale to that level of architecture quite well.
A: You're searching for http://grails.org/
You code it with Groovy, a dynamic language based on Java that runs smoothly together with Java code, classes and libraries. The syntax is neither hard to learn nor far from Java. Give it a try; it takes only minutes to get a web site up and running. Just follow http://grails.org/Installation and http://grails.org/Quick+Start
Greetz, GHad
A: Tapestry 5 can be setup very quickly using maven archetypes. See the Tapestry 5 tutorial:
http://tapestry.apache.org/tapestry5/tutorial1/
A: I really don't see what the big deal is with getting Maven + Eclipse to work, as long as you don't have to change the pom.xml too much :)
Most frameworks that use Maven have archetypes that can generate a stub project.
So basically the steps should be:
*
*Install maven
*Add M2_REPO class path variable to eclipse
*Generate project with the archetype
*Import project to eclipse
As for Wicket, there is no reason why you couldn't use it without Maven. The nice thing about Maven is that it takes care of all the dependencies so you don't have to. On the other hand, if the only thing you want to do is prototype a couple of pages, then Wicket can be overkill. But should your application grow, the benefits of Wicket would keep showing with each added form, link or page :)
A: The correct answer IMO depends on two things:
1. What is the purpose of the web application you want to write?
You only told us that you want to write it fast, but not what you are actually trying to do. E.g. does it need a database? Is it some sort of business app (hint: maybe search for "scaffolding")? Or a game? Or are you just experimenting with something?
2. What frameworks are you most familiar with right now?
What often takes most time is reading docs and figuring out how things (really) work. If you want it done quickly, stick to things you already know well.
A: After many painful experiences with Struts, Tapestry 3/4, JSF, JBoss Seam and GWT, I will stick with Wicket for now. Wicket Bench for Eclipse is handy but not 100% complete; still useful though. The MyEclipse plugin for deploying to Tomcat is ace. No Maven; just deploy once and changes are automatically copied to Tomcat. Magic.
My suggestion: Wicket 1.4, MyEclipse, Subclipse, Wicket Bench, Tomcat 6. It will take an hour or so to setup but most of that will be downloading tomcat and the Eclipse plugins.
Hint: Don't use the Wicket Bench libs, manually install Wicket 1.4 libs into project.
This site took me about 2 hours to write http://ratearear.co.uk - don't go there from work!! And this one is about 3 days work http://tnwdb.com
Good luck. Tim
A: The web4j tool markets itself as simple and easy. Some points about it:
*
*uses a single xml file (the web.xml file required by all servlets)
*no dependency on Maven (or any other 3rd party tool/jar)
*full stack, open source (BSD)
*smallest number of classes of any full stack java framework
*SQL placed in plain text files
*encourages use of immutable objects
*minimal toolset required (JSP/JSTL, Java, SQL)
A: Grails is the way to go if you like to do the CRUD easily and create a quick prototype application, plays nice with Eclipse as well. Follow the 'Build your first Grails application' tutorial here http://grails.org/Tutorials and you can be up and running your own application in less than an hour.
A: You can give JRapid a try. Using Domain Driven Design you define your application and it generates the full stack for your web app. It uses known open source frameworks and generates a very nice and ready to use UI.
A: I haven't used it, but AppFuse is designed to take care of the nasty setup that comes with Java web development.
A: Try WaveMaker http://wavemaker.com. Free, easy to use. The learning curve to build great-looking Java applications with WaveMaker is just a few weeks!
A: Castleframework
http://maven.castleframework.org/nexus/content/repositories/releases/
install using maven.
A: try Vaadin! Very simple and you'll be able to work the UI with ease as well! www.vaadin.com
A: I found a really lightweight Java web framework the other day.
It's called Jodd and gives you many of the basics you'd expect from Spring, but in a really light package that's <1MB.
http://jodd.org/
A: Also take a look at ActiveWeb. It's simple, lightweight, and makes use of a few other things that I like (Guice, Maven...). Its controllers can serve anything you want, including JSON, HTML, plain text, PDFs, images... You can make RESTful controllers and even use annotations to determine which HTTP methods (POST, GET, ...) a controller method accepts.
A: I would stick with JSP, servlets and JSTL.
After more than 12 years dealing with web frameworks at the several companies I've worked with, I always find myself going back to good old JSP.
Yes, there are some things you need to write yourself that some frameworks do automatically.
But if you approach it correctly and build some basic utilities on top of your servlets, it gives you the best flexibility and you can do whatever you want easily.
I have not found real advantages to writing in any of the frameworks. And I keep looking.
The sheer number of answers above also suggests that there is no single framework that is clearly good and rules them all.
A: Have you tried DWR? http://directwebremoting.org
A: Oracle ADF http://www.oracle.com/technology/products/jdev/index.html
A: Recently I found the AribaWeb framework, which looks very promising. It offers good functionality (even AJAX) and good documentation, is written in Groovy/Java, and even includes a Tomcat server. Trying to get into Spring really made me mad.
A: I recommend Apache Click as well.
If you pass the ten-minute test (I think that's the time it will take you to read the Quick Start Guide), you won't come back!
Regards,
Gilberto
A: Try this: http://skingston.com/SKWeb
It could do with some more features and improvements, but it is simple and it works.
A: A common property of Java Web-apps is that they usually use servlets which usually means the web server also runs Java. This contributes to the perceived complexity, IMHO. But you can build Java apps in the traditional Unix-style of "do one thing and do it well" without having performance suffer.
You can also use SCGI, it is a lot simpler than FastCGI. I'd try that first. But if it doesn't work out:
How to write a FastCGI application in Java
*
*Make an empty working directory and enter it
*Download the FastCGI devkit: wget --quiet --recursive --no-parent --accept=java --no-directories --no-host-directories "http://www.fastcgi.com/devkit/java/"
*mkdir -p com/fastcgi
*mv *.java com/fastcgi
*Now you need to apply a tiny patch to the devkit (replace operator == with <= on line 175 or use this script to do it):
echo -e "175c\nif (count <= 0) {\n.\nw\nn\nq" | ed -s com/fastcgi/FCGIInputStream.java
*Create a test app, TinyFCGI.java (source below)
*Compile everything: javac **/*.java (** will probably only work in zsh)
*Start the FastCGI server: java -DFCGI_PORT=9884 TinyFCGI (leave it running in the background)
*Now set up e.g. Apache to use the server:
*
*Using Apache 2.4, you can use mod_proxy_fcgi like this:
*
*Using Ubuntu, upgrade to Apache 2.4 using i.e. this PPA
*Enable the mod: sudo a2enmod proxy_fcgi
*Create /etc/apache2/conf-enabled/your_site.conf with the content below
*Restart Apache: sudo apache2ctl restart
*Now you can access the webapp at http://localhost/your_site
*Benchmark results below
TinyFCGI.java
import com.fastcgi.FCGIInterface;
import java.io.*;
import static java.lang.System.out;
class TinyFCGI {
public static void main (String args[]) {
int count = 0;
FCGIInterface fcgiinterface = new FCGIInterface();
while(fcgiinterface.FCGIaccept() >= 0) {
count++;
out.println("Content-type: text/html\n\n");
out.println("<html>");
out.println(
"<head><TITLE>FastCGI-Hello Java stdio</TITLE></head>");
out.println("<body>");
out.println("<H3>FastCGI-HelloJava stdio</H3>");
out.println("request number " + count +
" running on host "
+ System.getProperty("SERVER_NAME"));
out.println("</body>");
out.println("</html>");
}
}
}
your_site.conf
<Location /your_site>
ProxyPass fcgi://localhost:9884/
</Location>
Benchmark results
wrk
$ ./wrk -t1 -c100 -r10000 http://localhost/your_site
Making 10000 requests to http://localhost/your_site
1 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 3.58s 13.42s 1.06m 94.42%
Req/Sec 0.00 0.00 0.00 100.00%
10000 requests in 1.42m, 3.23MB read
Socket errors: connect 0, read 861, write 0, timeout 2763
Non-2xx or 3xx responses: 71
Requests/sec: 117.03
Transfer/sec: 38.70KB
ab
$ ab -n 10000 -c 100 localhost:8800/your_site
Concurrency Level: 100
Time taken for tests: 12.640 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 3180000 bytes
HTML transferred: 1640000 bytes
Requests per second: 791.11 [#/sec] (mean)
Time per request: 126.404 [ms] (mean)
Time per request: 1.264 [ms] (mean, across all concurrent requests)
Transfer rate: 245.68 [Kbytes/sec] received
siege
$ siege -r 10000 -c 100 "http://localhost:8800/your_site"
** SIEGE 2.70
** Preparing 100 concurrent users for battle.
The server is now under siege...^C
Lifting the server siege... done.
Transactions: 89547 hits
Availability: 100.00 %
Elapsed time: 447.93 secs
Data transferred: 11.97 MB
Response time: 0.00 secs
Transaction rate: 199.91 trans/sec
Throughput: 0.03 MB/sec
Concurrency: 0.56
Successful transactions: 89547
Failed transactions: 0
Longest transaction: 0.08
Shortest transaction: 0.00
{
"language": "en",
"url": "https://stackoverflow.com/questions/116978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "65"
}
Q: Do C# Generics Have a Performance Benefit? I have a number of data classes representing various entities.
Which is better: writing a generic class (say, to print or output XML) using generics and interfaces, or writing a separate class to deal with each data class?
Is there a performance benefit or any other benefit (other than it saving me the time of writing separate classes)?
A: I did some simple benchmarking of ArrayLists vs. generic Lists for a different question: Generics vs. Array Lists. Your mileage will vary, but the generic List was 4.7 times faster than the ArrayList.
So yes, boxing / unboxing are critical if you are doing a lot of operations. If you are doing simple CRUD stuff, I wouldn't worry about it.
A: There's a significant performance benefit to using generics -- you do away with boxing and unboxing. Compared with developing your own classes, it's a coin toss (with one side of the coin weighted more than the other). Roll your own only if you think you can out-perform the authors of the framework.
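As a small sketch of where that boxing cost shows up (not a benchmark; wrap the loops in a System.Diagnostics.Stopwatch if you want to measure it yourself):
using System;
using System.Collections;
using System.Collections.Generic;
class BoxingDemo
{
    static void Main()
    {
        // Non-generic ArrayList: every int is boxed on Add and unboxed by the cast on the way out.
        ArrayList untyped = new ArrayList();
        for (int i = 0; i < 1000000; i++) untyped.Add(i);
        long sum1 = 0;
        foreach (object o in untyped) sum1 += (int)o;
        // Generic List<int>: ints are stored directly, so there is no boxing and no cast.
        List<int> typed = new List<int>();
        for (int i = 0; i < 1000000; i++) typed.Add(i);
        long sum2 = 0;
        foreach (int i in typed) sum2 += i;
        Console.WriteLine("{0} {1}", sum1, sum2);
    }
}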
A: Generics are one of the ways to parameterize code and avoid repetition. Looking at your program description and your thought of writing a separate class to deal with each and every data object, I would lean toward generics. Having a single class take care of many data objects, instead of many classes that do the same thing, increases your performance. And of course your performance, measured in the ability to change your code, is usually more important than the computer's performance. :-)
A: According to Microsoft, Generics are faster than casting (boxing/unboxing primitives) which is true.
They also claim generics provide better performance than casting between reference types, which seems to be untrue (no one can quite prove it).
Tony Northrup - co-author of MCTS 70-536: Application Development Foundation - states in the same book the following:
I haven't been able to reproduce the performance benefits of generics; however, according to Microsoft, generics are faster than using casting. In practice, casting proved to be several times faster than using a generic. However, you probably won't notice performance differences in your applications. (My tests over 100,000 iterations took only a few seconds.) So you should still use generics because they are type-safe.
I haven't been able to reproduce such performance benefits with generics compared to casting between reference types - so I'd say the performance gain is "supposed" more than "significant".
A: If you compare a generic list (for example) to a specific list for exactly the type you use, the difference is minimal; the results from the JIT compiler are almost the same.
If you compare a generic list to a list of objects, then there are significant benefits to the generic list: no boxing/unboxing for value types and no type checks for reference types.
Also, the generic collection classes in the .NET library were heavily optimized, and you are unlikely to do better yourself.
A: In the case of generic collections vs. boxing et al, with older collections like ArrayList, generics are a performance win. But in the vast majority of cases this is not the most important benefit of generics. I think there are two things that are of much greater benefit:
*
*Type safety.
*Self documenting aka more readable.
Generics promote type safety, forcing a more homogeneous collection. Imagine stumbling across a string when you expect an int. Ouch.
Generic collections are also more self documenting. Consider the two collections below:
ArrayList listOfNames = new ArrayList();
List<NameType> listOfNames = new List<NameType>();
Reading the first line you might think listOfNames is a list of strings. Wrong! It is actually storing objects of type NameType. The second example not only enforces that the type must be NameType (or a descendant), but the code is more readable. I know right away that I need to go find NameType and learn how to use it just by looking at the code.
I have seen a lot of these "does x perform better than y" questions on StackOverflow. The question here was very fair, and as it turns out generics are a win any way you skin the cat. But at the end of the day the point is to provide the user with something useful. Sure your application needs to be able to perform, but it also needs to not crash, and you need to be able to quickly respond to bugs and feature requests. I think you can see how these last two points tie in with the type safety and code readability of generic collections. If it were the opposite case, if ArrayList outperformed List<>, I would probably still take the List<> implementation unless the performance difference was significant.
As far as performance goes (in general), I would be willing to bet that you will find the majority of your performance bottlenecks in these areas over the course of your career:
*
*Poor design of database or database queries (including indexing, etc),
*Poor memory management (forgetting to call dispose, deep stacks, holding onto objects too long, etc),
*Improper thread management (too many threads, not calling IO on a background thread in desktop apps, etc),
*Poor IO design.
None of these are fixed with single-line solutions. We as programmers, engineers and geeks want to know all the cool little performance tricks. But it is important that we keep our eyes on the ball. I believe focusing on good design and programming practices in the four areas I mentioned above will further that cause far more than worrying about small performance gains.
A: Generics are faster!
I also discovered that Tony Northrup wrote wrong things about performance of generics and non-generics in his book.
I wrote about this on my blog:
http://andriybuday.blogspot.com/2010/01/generics-performance-vs-non-generics.html
Here is great article where author compares performance of generics and non-generics:
nayyeri.net/use-generics-to-improve-performance
A: Not only yes, but HECK YES. I didn't believe how big of a difference they could make. We did testing in VistaDB after a rewrite of a small percentage of core code that used ArrayLists and HashTables over to generics. 250% or more was the speed improvement.
Read my blog about the testing we did on generics vs weak type collections. The results blew our mind.
I have started rewriting lots of old code that used the weakly typed collections into strongly typed ones. One of my biggest gripes with the ADO.NET interface is that it doesn't expose more strongly typed ways of getting data in and out. The casting time from an object and back is an absolute killer in high volume applications.
Another side effect of strongly typing is that you often will find weakly typed reference problems in your code. We found that through implementing structs in some cases to avoid putting pressure on the GC we could further speed up our code. Combine this with strongly typing for your best speed increase.
Sometimes you have to use weakly typed interfaces within the dot net runtime. Whenever possible though look for ways to stay strongly typed. It really does make a huge difference in performance for non trivial applications.
A: If you're thinking of a generic class that calls methods on some interface to do its work, that will be slower than specific classes using known types, because calling an interface method is slower than a (non-virtual) function call.
Of course, unless the code is the slow part of a performance-critical process, you should focus on clarity.
A: See Rico Mariani's Blog at MSDN too:
http://blogs.msdn.com/ricom/archive/2005/08/26/456879.aspx
Q1: Which is faster?
The Generics version is considerably
faster, see below.
The article is a little old, but gives the details.
A: Generics in C# are truly generic types from the CLR perspective. There should not be any fundamental difference between the performance of a generic class and a specific class that does the exact same thing. This is different from Java Generics, which are more of an automated type cast where needed or C++ templates that expand at compile time.
Here's a good paper, somewhat old, that explains the basic design:
"Design and Implementation of Generics for the
.NET Common Language Runtime".
If you hand-write classes for specific tasks chances are you can optimize some aspects where you would need additional detours through an interface of a generic type.
In summary, there may be a performance benefit but I would recommend the generic solution first, then optimize if needed. This is especially true if you expect to instantiate the generic with many different types.
A: Not only can you do away with boxing but the generic implementations are somewhat faster than the non generic counterparts with reference types due to a change in the underlying implementation.
The originals were designed with a particular extension model in mind. This model was never really used (and would have been a bad idea anyway) but the design decision forced a couple of methods to be virtual and thus uninlineable (based on the current and past JIT optimisations in this regard).
This decision was rectified in the newer classes but cannot be altered in the older ones without it being a potential binary breaking change.
In addition, iteration via foreach on a List<> (rather than an IList<>) is faster due to ArrayList's Enumerator requiring a heap allocation. Admittedly this did lead to an obscure bug
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41"
}
|
Q: Is there an equivalent to the XNA framework for consoles other than XBox360? It's gotta be free. It's a hobby, after all, not a business! Creating for-profit software isn't an issue, but anything that requires a hardware mod is out.
A: Nope, I don't think so. The only other .NET environment for consoles I know costs money and is called unity3d: http://unity3d.com/
I think it supports the iPhone and the Wii and uses Mono as runtime environment. 200 bucks and you are in :)
A: No, all of the major consoles, except for the Xbox 360, do not have open development environments. There are various homebrew kits you can get, but these aren't sanctioned by the console makers (Nintendo, Sony, and Microsoft), so at best, you'll only be able to give away ROMs of what you make for free. If you try to sell anything, you'll get sued into the ground.
A: As Adam said, homebrew is pretty much the only way to do what it sounds like you want to do. A lot of times, using homebrew kits also involves modifying the console in some manner.
There is a Linux-based portable game device called the GP2X that might interest you, but I think that open source game development (or at least game development using open source tools) is more of a PC thing.
If you are a student at an accredited university, you can get a free 12-month trial subscription to the XNA Creator's Club through the Dreamspark site.
A: Free and official? No. And XNA isn't free in the respect that you have to pay $99 to use it on the 360.
You're pretty much limited to hacked, homebrew development, coupled with hardware modification. There's at least one console out there where the hardware modification is not required but I'm not sure if we can talk about it.
On a historical note, Sony released something in Japan called Yarouze on the PSX which had a similar situation to the XNA Creators Club on the 360 (paid kit, only for hobbyists) but it never came to the USA.
A: Not as widely documented and supported as XNA, but here are some libraries with documentation/wiki's for the handheld consoles.
*
*GameBoy Advance: HAM
*Nintendo DS: PAlib, devkitPro (the basis for pretty much all homebrew on the DS)
A: You may still be able to find a "PS2 Linux" kit available - but the games you make there will only run on other instances of PS2 Linux - a limited audience.
A: It depends on what you classify as a console. The iPod Touch and the iPhone have the iPhone SDK, with which it should be possible to develop quite good games, and when you're done they can easily be distributed through the App Store either for free or for a price, of which you will be given 70%.
A: As bhinks mentioned, there's the GP2X, which has been around for a bit, and there's a huge community of homebrew game developers for it. The GP2X has now ceased production, and it has two successors on the way: the WIZ, by the same company (Game Park), and the Pandora, which is a proper enthusiast device.
The beauty is you can do games in SDL and build for all devices, including the PC.
A: Just pay your $200 for the Unity3D indie license and you can create games for PC, Mac, the browser, iPhone and Wii. It's arguably a more powerful engine than XNA because it has built-in collision detection, physics, etc.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/116992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Prevent people from pushing a git commit with a different author name? In git, it is up to each user to specify the correct author in their local git config file. When they push to a centralized bare repository, the commit messages on the repository will have the author names that they used when committing to their own repository.
Is there a way to enforce that a set of known authors is used for commits? The "central" repository will be accessible via ssh.
I know that this is complicated by the fact that some people may be pushing commits that were made by others. Of course, you should also only allow people you trust to push to your repositories, but it would be great if there was a way to prevent user error here.
Is there a simple solution to this problem in git?
A: We use Gitlab and so it makes sense for us to validate authors against Gitlab group members.
The following script (based on @dsvensson's answer), which should be installed as a pre-receive hook, does exactly that:
from __future__ import print_function
from __future__ import unicode_literals
import sys
import os
import subprocess
import urllib2
import json
import contextlib
import codecs
from itertools import islice, izip
GITLAB_SERVER = 'https://localhost'
GITLAB_TOKEN = 'SECRET'
GITLAB_GROUP = 4
EMAIL_DOMAIN = 'example.com'
def main():
commits = get_commits_from_push()
authors = get_gitlab_group_members()
for commit, author, email in commits:
if author not in authors:
die('Unknown author', author, commit, authors)
if email != authors[author]:
die('Unknown email', email, commit, authors)
def get_commits_from_push():
old, new, branch = sys.stdin.read().split()
rev_format = '--pretty=format:%an%n%ae'
command = ['git', 'rev-list', rev_format, '{0}..{1}'.format(old, new)]
# branch delete, let it through
if new == '0000000000000000000000000000000000000000':
sys.exit(0)
# new branch
if old == '0000000000000000000000000000000000000000':
command = ['git', 'rev-list', rev_format, new, '--not', '--branches=*']
output = subprocess.check_output(command)
commits = [line.strip() for line in unicode(output, 'utf-8').split('\n') if line.strip()]
return izip(islice(commits, 0, None, 3),
islice(commits, 1, None, 3),
islice(commits, 2, None, 3))
def get_gitlab_group_members():
url = '{0}/api/v3/groups/{1}/members'.format(GITLAB_SERVER, GITLAB_GROUP)
headers = {'PRIVATE-TOKEN': GITLAB_TOKEN}
request = urllib2.Request(url, None, headers)
with contextlib.closing(urllib2.urlopen(request)) as response:
members = json.load(response)
return dict((member['name'], '{}@{}'.format(member['username'], EMAIL_DOMAIN))
for member in members)
def die(reason, invalid_value, commit, authors):
message = []
message.append('*' * 80)
message.append("ERROR: {0} '{1}' in {2}"
.format(reason, invalid_value, commit))
message.append('-' * 80)
message.append('Allowed authors and emails:')
print('\n'.join(message), file=sys.stderr)
for name, email in authors.items():
print(u" '{0} <{1}>'".format(name, email), file=sys.stderr)
sys.exit(1)
def set_locale(stream):
return codecs.getwriter('utf-8')(stream)
if __name__ == '__main__':
# avoid Unicode errors in output
sys.stdout = set_locale(sys.stdout)
sys.stderr = set_locale(sys.stderr)
# you may want to skip HTTPS certificate validation:
# import ssl
# if hasattr(ssl, '_create_unverified_context'):
# ssl._create_default_https_context = ssl._create_unverified_context
main()
See GitLab custom Git hooks docs for installation instructions.
Only get_gitlab_group_members() is Gitlab-specific, other logic applies to any pre-receive hook (including handling branch deletions and creations).
The script is now available in GitHub, please feel free to send pull requests for any mistakes/improvements.
A: Use the PRE-RECEIVE hook (see githooks(5) for details). There you get old sha and new sha for each ref updated. And can easily list the changes and check that they have proper author (git rev-list --pretty=format:"%an %ae%n" oldsha..newsha).
Here is an example script:
#!/bin/bash
#
# This pre-receive hooks checks that all new commit objects
# have authors and emails with matching entries in the files
# valid-emails.txt and valid-names.txt respectively.
#
# The valid-{emails,names}.txt files should contain one pattern per
# line, e.g:
#
# ^.*@0x63.nu$
# ^allowed@example.com$
#
# To just ensure names are just letters the following pattern
# could be used in valid-names.txt:
# ^[a-zA-Z ]*$
#
NOREV=0000000000000000000000000000000000000000
while read oldsha newsha refname ; do
# deleting is always safe
if [[ $newsha == $NOREV ]]; then
continue
fi
# make log argument be "..$newsha" when creating new branch
if [[ $oldsha == $NOREV ]]; then
revs=$newsha
else
revs=$oldsha..$newsha
fi
echo $revs
git log --pretty=format:"%h %ae %an%n" $revs | while read sha email name; do
if [[ ! $sha ]]; then
continue
fi
grep -q -f valid-emails.txt <<<"$email" || {
    echo "Email address '$email' in commit $sha not registered when updating $refname"
    exit 1
}
grep -q -f valid-names.txt <<<"$name" || {
    echo "Name '$name' in commit $sha not registered when updating $refname"
    exit 1
}
done || exit 1   # 'exit' inside the piped loop only leaves that subshell, so propagate the failure here
done
A: We use the following to prevent accidental unknown-author commits (for example when doing a fast commit from a customer's server or something). It should be placed in .git/hooks/pre-receive and made executable.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import subprocess
from itertools import islice, izip
import sys
old, new, branch = sys.stdin.read().split()
authors = {
"John Doe": "john.doe@example.com"
}
proc = subprocess.Popen(["git", "rev-list", "--pretty=format:%an%n%ae%n", "%s..%s" % (old, new)], stdout=subprocess.PIPE)
data = [line.strip() for line in proc.stdout.readlines() if line.strip()]
def print_error(commit, author, email, message):
print "*" * 80
print "ERROR: Unknown Author!"
print "-" * 80
proc = subprocess.Popen(["git", "rev-list", "--max-count=1", "--pretty=short", commit], stdout=subprocess.PIPE)
print proc.stdout.read().strip()
print "*" * 80
raise SystemExit(1)
for commit, author, email in izip(islice(data, 0, None, 3), islice(data, 1, None, 3), islice(data, 2, None, 3)):
_, commit_hash = commit.split()
if not author in authors:
print_error(commit_hash, author, email, "Unknown Author")
elif authors[author] != email:
print_error(commit_hash, author, email, "Unknown Email")
A: What you could do is create a bunch of different user accounts, put them all in the same group and give that group write access to the repository. Then you should be able to write a simple incoming hook that checks if the user that executes the script is the same as the user in the changeset.
I've never done it because I trust the guys that check code into my repositories, but if there is a way, that's probably the one explained above.
A: git wasn't initially designed to work like svn with a big central repository. Perhaps you can pull from people as needed, and refuse to pull if they have their author set inaccurately?
A: If you want to manage rights to an internet facing git repo, I suggest you look at Gitosis rather than whipping up your own. Identity is provided by private/public key pairs.
Read me pimping it here, too.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
}
|
Q: How best to test the validity of XML from a method? I have some WCF methods that are used to transmit information from a server application to a website frontend for use in binding. I'm sending the result as an XElement that is a root of an XML tree containing the data I want to bind against.
I'd like to create some tests that examine the data and ensure it comes across as expected.
My current thinking is this: Every method that returns an XElement tree has a corresponding schema (.XSD) file. This file is included within the assembly that contains my WCF classes as an embedded resource.
Tests call the method on these methods and compares the result against these embedded schemas.
Is this a good idea? If not, what other ways can I use to provide a "guarantee" of what kind of XML a method will return?
If it is, how do you validate an XElement against a schema? And how can I get that schema from the assembly it's embedded in?
A: If you're doing some light-weight work and XSDs are overkill, consider also possibly strongly typing your XML data. For example, I have a number of classes in a project that derive from XElement. One is ExceptionXElement, another is HttpHeaderXElement, etc. In them, I inherit from XElement and add Parse and TryParse methods that take strings containing XML data to create an instance from. If TryParse() returns false, the string does not conform to the XML data I expect (the root element has the wrong name, missing child elements, etc.).
For example:
public class MyXElement : XElement
{
public MyXElement(XElement element)
: base(element)
{ }
public static bool TryParse(string xml, out MyXElement myElement)
{
XElement xmlAsXElement;
try
{
xmlAsXElement = XElement.Parse(xml);
}
catch (XmlException)
{
myElement = null;
return false;
}
// Use LINQ to check that xmlAsXElement has the correct nodes...
// (the root-name test below is only a placeholder for whatever checks your data needs)
if (xmlAsXElement.Name.LocalName != "MyElement")
{
    myElement = null;
    return false;
}
myElement = new MyXElement(xmlAsXElement);
return true;
}
}
A: I'd say validating XML with an XSD schema is a good idea.
How to validate a XElement with the loaded schema:
As you see in this example, you need to validate the XDocument first to populate the "post-schema-validation infoset" (there might be a way to do this without using the Validate method on the XDocument, but I've yet to find one):
String xsd =
@"<xsd:schema xmlns:xsd='http://www.w3.org/2001/XMLSchema'>
<xsd:element name='root'>
<xsd:complexType>
<xsd:sequence>
<xsd:element name='child1' minOccurs='1' maxOccurs='1'>
<xsd:complexType>
<xsd:sequence>
<xsd:element name='grandchild1' minOccurs='1' maxOccurs='1'/>
<xsd:element name='grandchild2' minOccurs='1' maxOccurs='2'/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>";
String xml = @"<?xml version='1.0'?>
<root>
<child1>
<grandchild1>alpha</grandchild1>
<grandchild2>beta</grandchild2>
</child1>
</root>";
XmlSchemaSet schemas = new XmlSchemaSet();
schemas.Add("", XmlReader.Create(new StringReader(xsd)));
XDocument doc = XDocument.Load(XmlReader.Create(new StringReader(xml)));
Boolean errors = false;
doc.Validate(schemas, (sender, e) =>
{
Console.WriteLine(e.Message);
errors = true;
}, true);
errors = false;
XElement child = doc.Element("root").Element("child1");
child.Validate(child.GetSchemaInfo().SchemaElement, schemas, (sender, e) =>
{
Console.WriteLine(e.Message);
errors = true;
});
How to read the embedded schema from an assembly and add it to the XmlSchemaSet:
Assembly assembly = Assembly.GetExecutingAssembly();
// you can use reflector to get the full namespace of your embedded resource here
Stream stream = assembly.GetManifestResourceStream("AssemblyRootNamespace.Resources.XMLSchema.xsd");
XmlSchemaSet schemas = new XmlSchemaSet();
schemas.Add(null, XmlReader.Create(stream));
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How to Retrieve name of current Windows User (AD or local) using Python? How can I retrieve the name of the currently logged in user, using a python script? The function should work regardless of whether it is a domain/ad user or a local user.
A: as in https://stackoverflow.com/a/842096/611007 by Konstantin Tenzin
Look at getpass module
import getpass
getpass.getuser()
Availability: Unix, Windows
Note "this function looks at the values of various environment
variables to determine the user name. Therefore, this function should
not be relied on for access control purposes (or possibly any other
purpose, since it allows any user to impersonate any other)."
at least, definitely preferable over getenv.
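An untested sketch that combines the suggestions in this thread - getpass for the user name, plus the Windows-only USERDOMAIN environment variable to pick up the AD domain when it is present:
import getpass
import os

def current_user():
    user = getpass.getuser()
    domain = os.environ.get("USERDOMAIN")  # set on Windows, normally absent on Unix
    return "%s\\%s" % (domain, user) if domain else user

print(current_user())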
A: Try this:
import os;
print os.environ.get( "USERNAME" )
That should do the job.
A: win32api.GetUserName()
win32api.GetUserNameEx(...)
See:
http://timgolden.me.uk/python/win32_how_do_i/get-the-owner-of-a-file.html
A: I don't know Python, but for windows the underlying api is GetUserNameEx, I assume you can call that in Python if os.environ.get( "USERNAME" ) doesn't tell you everything you need to know.
A: Pretty old question but to refresh the answer to the original question "How can I retrieve the name of the currently logged in user, using a python script?" use:
import os
print (os.getlogin())
Per Python documentation: getlogin - Return the name of the user logged in on the controlling terminal of the process.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: How do I get Firefox to launch Visio when I click on a linked .vsd file? On our intranet site, we have various MS Office documents linked. When I click on a Word, Excel or PowerPoint file, Firefox gives me the option to Open, Save or Cancel. When I click on Open, the appropriate app is launched and the file is loaded. This is perfect. But for some reason, when I click on a linked Visio file, I only get the option to Save, which is inconvenient.
I know that Firefox knows the linked file is a Visio file because it tells me so in the dialog box: "You have chosen to open example.vsd which is a: Microsoft Visio Drawing".
How can I make Firefox launch Visio when I click on a linked Visio file?
Update:
Firefox is not launching Visio when I click on a linked Visio file because the web server does not identify the content-type correctly. It identifies the Visio file as application/octet-stream instead of application/x-visio. (Thanks Grant Wagner.) This explains why it doesn't work. And in my case, I may be able to get the Apache config file changed, but this is not certain.
However, I would love to know if there is a way to configure Firefox itself to launch Visio based on some other criteria, like file name extension. This way I can open Visio files even if I don't have access to the Apache configuration.
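For completeness, if you do get access to the Apache configuration, the server-side fix is usually a one-line MIME mapping in httpd.conf or an .htaccess file, along these lines (hedged example; the thread mentions both application/x-visio and application/vnd.visio, so use whichever your environment expects):
# Map Visio drawings to a Visio MIME type instead of application/octet-stream
AddType application/vnd.visio .vsd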
A: @Dean
There are only two buttons in the dialog box: "Save File" and "Cancel". The "Open with" option is not there at all.
But I think I know what you mean. Sometimes, the "Open with" option is grayed out and only becomes available a moment later. Unfortunately that's not the case here.
If Open With is not available, the most likely cause is that Firefox does not know the MIME type of the document and is assuming it is application/octet-stream, or your web server is serving up files that end in .vsd as application/octet-stream (or some other binary-only MIME type).
To confirm this, download LiveHTTPHeaders and use it to confirm that the MIME type of the file is application/x-visio.
A: Edit the file %appdata%\Mozilla\Firefox\Profiles\your profile\mimeTypes.rdf
Add in the following
<RDF:li RDF:resource="urn:mimetype:application/vnd.visio"/>
<RDF:Description RDF:about="urn:mimetype:externalApplication:application/vnd.visio"
NC:prettyName="VISIO.EXE"
NC:path="FULL PATH TO YOUR VISIO\VISIO.EXE" />
<RDF:Description RDF:about="urn:mimetype:application/vnd.visio"
NC:value="application/vnd.visio"
NC:editable="true"
NC:fileExtensions="vsd"
NC:description="Microsoft Visio Drawing">
<NC:handlerProp RDF:resource="urn:mimetype:handler:application/vnd.visio"/>
</RDF:Description>
<RDF:Description RDF:about="urn:mimetype:handler:application/vnd.visio"
NC:alwaysAsk="false">
<NC:externalApplication RDF:resource="urn:mimetype:externalApplication:application/vnd.visio"/>
<NC:possibleApplication RDF:resource="urn:handler:local:FULL PATH TO YOUR VISIO\VISIO.EXE"/>
</RDF:Description>
This is working for me under Firefox 3.6.3 under Windows XP SP2
A: Added extension 'OpenDownload' which resolved the issue.
A: Go under Tools, Options.. in firefox, then when the options box comes up go to applications, there you can set all extensions and launch conditions. Actually it's termed "Content Type" and "Action" there...
A: If the behavior is similar to opening an Application, all you need to do is click the Open/Save dialog and the Open button will become available about a second later. Does this help?
A: Going under Tools | Options... doesn't seem to work, as after doing so you get an error that an unknown error occurred opening the file.
However, if you install the OpenDownload extension, then you get a run button which successfully runs Visio.
A: The problem is with the VSD file type.
Open Windows Explorer
Menu / Tools / Folder options
Click on the File Type TAB
Locate the VSD file type (just type v s d > it will get You there)
There are two Buttons: [Modify] and [Special] -- Click on the [Special] button
--- The Actions associated with the file are listed
You have to add the Open option:
Add the path to Visio as follows:
"C:\Program Files[## correct PATH##]\VISIO.EXE" /e
(Just check how an other filetype is setup, e.g.: DOC or XLS)
Also there is the option: Browse in same window.
Uncheck the "Browse in same window" checkbox,
click [OK],
and there you go! The browser should ask if you want to open or download the file,
and once you mark your option and remove the checkbox from "Always ask for this file type...", your VSD document should open directly in Visio.
Hope this Helps, BR, Zoltan Gajdatsy
A: Step-by-step:
*
*On Firefox, go to a site with the file, right-click on a vsd or vsdx file and select download.
*On the download window, mark remember my choice option.
*Go under tools > options > application, search for visio type and change dropbox to "open with" and then, localize the application you wish use.
I tested this on Firefox 33.0.2 accessing files in Sharepoint.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What is the best way for a Java program to monitor system health? I would like to be able to monitor my major system health indicators from inside our Java-based system. Major points of interest include CPU temperature, motherboard temperature, fan speed, etc.
Is there a package available that:
*
*Makes this sort of data available to Java?
*Works on Windows or Linux or both?
*Is open / free / cheap?
A: The closest thing you'll find is the Hyperic Sigar library:
http://www.hyperic.com/products/sigar.html
It doesn't get down to temperatures AFAIK but does show you a number of native stats like CPU, memory, disk I/O, network I/O, etc. It's ported to most of the architectures people are deploying Java on today. License is GPL although I think you can buy commercial licenses as well.
We use Sigar with Terracotta for cluster monitoring and have been very happy with it.
A: I believe most of this sort of thing is usually done over SNMP (for professional systems); it's the usual way to make this sort of information available in a standards-based manner. JMX is also available over SNMP. The question then becomes, which are the better SNMP libraries for Java (and does your system support it)?
A: There are MIBs supported by both Windows and Linux that expose the parameters you are looking for via SNMP. Also, most major vendors have special MIBs published for their server hardware.
I have implemented SNMP MIBs and monitoring for Java applications using the commercial iReasoning SNMP API and they worked great. There is also the open source SNMP4J, which I don't personally have experience with, but looks pretty good.
So, for your needs, you would turn on the publishing of SNMP information for the hosts you want to monitor. No coding necessary. This is just a configuration issue.
For CPU temperature, for example, you must enable the MIB LM-SENSORS-MIB. Under Linux you can use the snmpwalk client to take a look at OID .1.3.6.1.4.1.2021.13.16.2.1.3
to see CPU temperature. Once you have that up and you know it's publishing data correctly, you can begin to implement your real monitoring solution.
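For example, a hedged snmpwalk invocation (the host, SNMP version and community string are assumptions for your setup):
snmpwalk -v 2c -c public localhost .1.3.6.1.4.1.2021.13.16.2.1.3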
You can use a Java SNMP library to poll and subscribe to SNMP traps for the hosts you want to monitor. You could also use any commercial or open-source monitoring tool (Google for SNMP console).
A: A few months ago I looked for such a library and I found nothing interesting. It is not impossible to create one, so I would recommend doing so. You'll probably need to access native libraries, to do that use JNA (it's easier than JNI). Start by adding support for a few things on one platform, then start adding support for other features and platforms.
Then share it with us! People will start using it, maybe even help with development, and soon you'll have a fully featured system monitoring library for Java.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Licensing and using the Linux kernel I would like to write my own OS, and would like to temporarily jump over the complicated task of writing the kernel and come back to it later by using the Linux kernel in the mean time. However, I would like to provide the OS as closed source for now. What license is the Linux kernel under and is it possible to use it for release with a closed source OS?
Edit: I am not interested in closing the source of the Linux kernel, I would still provide that as open sourced. I am wondering if I could use a closed source OS with an open source kernel.
Further edit: By OS, I mean the system that runs on top of the kernel and is used to launch other programs. I certainly did not mean to include the kernel in the closed source statement.
A: The Linux kernel is released under the GPLv2 and you can use it as part of a closed-source OS but you have to keep the kernel and all modifications released GPLv2.
Edit: Btw, you may want to use something like OpenSolaris instead. It's much easier to work with, in my opinion (obviously very subjective), and you can keep modifications closed-source if you so choose, so long as you follow the terms of the CDDL.
A: I think you're going to have to be more specific about what you mean by 'OS'. It's by no means a clear concept. Some would say that the kernel is all of the OS. Others would say that the shell and core utilities such as 'ls' are part of the OS. Others would go as far as to say that standard applications such as Notepad are part of the OS.
IANAL, but I don't believe there's anything to stop you from bundling the Linux kernel with a load of closed-source programs of your own. Take care not to use any GPL library code however (LGPL is OK).
I do question your motives.
A: It's GPL version 2 and you may certainly not close its source.
A: You must keep the source open, and any works derived from the code, however, if you use the Kernel, write your own application stack on top of that (pretty much ALL the GNU stuff) then you don't have to open that up.
The GPL applies to "derived" works... so if you're writing new code, instead of expanding on existing code, then that's fine. In fact, you could even, for example, use the GNU toolchain, the Linux kernel, and then have your own system on top of that (or just a DE) that is closed source.
It's when you modify/derive from something that you have to keep it open!
A: You can of course write whatever closed-source OS over the Linux kernel that you like provided you are compatible with the licensing of components you link against.
Of course that's likely to include the gnu C library (or some other C library). You may also need some command line utilities which will probably be GPL to do things such as filesystem maintenance, network setup etc. But provided you leave those as their own standalone programs, it should not be a problem.
Anything that you link into the kernel itself (e.g. custom modules, patches) should be released as open source GPL to comply with the kernel's licence.
A: Linux has the GPL (v2) as its licence, which means you have to open source any derivative works.
You may want to use BSD; its license is a lot less restrictive in what you can do with derived works.
A: If the filesystem you use is to be linked into the kernel itself, and if you plan to distribute it to others, the GPL pretty unambiguously requires that the filesystem be GPL'ed as well.
That being said: one way to legally interface Linux with a GPL-incompatible filesystem is via FUSE (filesystem in userspace). This has been used, for example, to run the GPL-incompatible ZFS filesystem on top of Linux. Running a filesystem in userspace does, however, carry a performance penalty that may be significant.
A: It is GPL. Short answer -- no.
A: You can always keep any extensions (modules) and/or applications you write closed source, but the kernel itself will need to remain open source.
There's a not-so-obvious aspect of the GPLv2 that you can exploit while testing the system: you only need to release source code to those who have access to the system. The GPLv2 states that you need to give full access to the source code to anyone with access to the binary/compiled distribution of the program. So, if you are only going to use the software inside of the company that is paying to develop it, you don't need to distribute the source code to the rest of the world, but just them.
A: Generally I would say that you're allowed to do such a thing, as long as you provide the source for the kernel, but there's one point where I'm unsure:
On a normal Linux system between the (GPL) kernel and a non-GPL compatible application, there is always the GNU libc, which is LGPL and thus allows derived works that are non-free. Now, if you have a non-free libc, that might be considered a derived work, since you are directly calling the kernel, and also using kernel headers.
As many others have said before, you might be better off using a *BSD.
A: If you're serious in developing a new operating system and want a working kernel to start with I suggest that you look into the FreeBSD kernel. It has a much more relaxed license than Linux, I think you might find it worthwhile.
Just my 2 cents...
A: I agree with MarkR but nobody has stated the obvious to you. If you are serious, you need to consult a lawyer with expertise in this area.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: ASP.NET Master Page in separate assembly I have some ASP.NET Master Pages located in one assembly. I need to use these Master Pages with my WebForm pages located in other assemblies that have a reference to the first assembly. I cannot seem to figure out a way to do this.
Is there a nice way that I can do this?
If no pretty solution exists I would also like to hear about possible hacks?
A: Master Pages are based upon Usercontrols, therefore they cannot be shared across applications.
That said, Dan Wahlin has a way around that particular limitation listed on his blog.
A: Sharing Master Pages Across IIS Applications by Dan Wahlin will give you some ideas..
Also, check this excellent article by Scott Allen. You can check the last section at the end on sharing master pages.
A: There are no pretty solutions to this problem.
What you can do is put it into a separate web application project and pre-compile it. Then take the precompiled dlls and ILMerge them into a single dll that you can reference in your app.
The actual master page that you reference will have a class name like this: MyMasterPage_aweknk so look it up in reflector. The class that looks like "MyMasterPage" is really just the code-behind.
It's really disgusting, but it's what we've done to reuse user controls in multiple applications.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Error Message Text - Best Practices We are changing some of the text for our old, badly written error messages. What are some resources for best practices on writing good error messages (specifically for Windows XP/Vista).
A: In terms of wording your error messages, I recommend referring to the following style guides for Windows applications:
*
*Windows user experience guidelines, and specifically the section on error messages here.
*Microsoft Manual of Style
A: The ultimate best practice is to prevent the user from causing errors in the first place.
Don't tell users anything they don't care about; error code 5064 doesn't mean a thing to anyone. Don't tell them they did something wrong; disallow it in the first place. Don't blame them, especially not for mistakes your software made. Above all, when there is a problem, tell them how to fix it so they can move on and get some work done.
A: A good error message should:
*
*Be unobtrusive (no blue-screen or yellow-screen of death)
*Give the user direction to correct the problem (on their own if possible, or who to contact for help)
*Hide useless, esoteric programmer nonsense ( don't say, "a null reference exception occurred on line 45")
*Be descriptive without being verbose. Just enough information to tell the user what they need to know and nothing more.
One thing I've started to do is to generate a unique number that I display in the error message and write to the log file so I can find the error in the log when the user sends me a screenshot or calls and says, "I got an error. It says my reference number is 0988-7634"
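A minimal sketch of that reference-number idea (the logging call and the message wording are placeholders, not a prescribed API):
using System;

static class ErrorReporter
{
    public static void Report(Exception ex)
    {
        // short code for the user, full details for the log/support team
        string reference = Guid.NewGuid().ToString("N").Substring(0, 8).ToUpper();
        System.Diagnostics.Trace.TraceError("Ref {0}: {1}", reference, ex);
        Console.WriteLine(
            "Sorry, something went wrong and your changes were not saved. " +
            "Please contact support and quote reference {0}.", reference);
    }
}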
A: For security reasons, don't provide internal system information that the user does not need.
Trivial example: when failing to login, don't tell the user if the username is wrong or the password is wrong; this will only help the attacker to brute force the system. Instead, just say "Username/Password combination is invalid" or something like that.
A: Always include suggestions to Remedy the error.
A: Try to figure out a way to write your software so it corrects the problem for them.
A: For any user input (strings, filenames, values, etc), always display the erroneous value with delimiters around it (quotes, brackets, etc). e.g.
The filename you entered could not be found: "somefile.txt"
This helps to show any whitespace/carriage returns that may have sneaked in and greatly reduces troubleshooting and frustration.
A: *
*Avoid identical error messages coming from different places; parametrize with file:line if possible, or use other context that lets you, the developer, uniquely identify where the error occurred.
*Design the mechanism to allow easy localization, especially if it is a commercial product.
*If the error messages are user-visible, make them complete, meaningful sentences that don't assume intimate knowledge of the code; remember, you're always too close to the problem -- the user is not. If possible, give the user guidance on how to proceed, who to contact, etc.
*Every error should have a message if possible; if not, then try and make sure that all error-unwind paths eventually reach an error message that sheds light on what happened.
I'm sure there will be other good answers here...
A: Shorter messages may actually be read.
The longer your error message, the less the user will read. That being said, try to refactor the code so you can eliminate exceptions if there is an obvious response. Try to only have exceptions that happen based on things beyond your user or your code's control.
The best exception message is the one you never have to display.
A: Error handling is always better than error reporting, but since you are retrofitting the error messages and not necessarily the code here's a couple of suggestions:
Users want solutions, not problems. Help them know what to do after an error, even if the message is as simple as "Please close the current window and retry your action."
I am also a big fan of centralized logging of errors. Make sure the log is both human and computer scanable. Users don't always let you know what problems they are having, especially if they can be 'worked around', so the log can help you know what things need fixed.
If you can control the error dialog easily, having a dialog which shows a nice, readable message with a 'details' button to show the error number, trace, etc. can be a big help for real-time problem solving as well.
A: Support for multilanguage applies for all kinds of messages, but tends to be forgotten in the case of error messages.
A: I would second not telling the user useless esoteric information like numeric error codes. I would follow that up however by saying to definitely log that information for troubleshooting by more technically savvy people.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: What is the Best Way to Dynamically Create an MSMQ Listener? I was using Spring.NET to create a message listener at application startup. I now need the ability to create 0-n listeners, some of which could be listening to different queues. I will need to be able to create/destroy them at run-time. What would be the best way to go about doing this? My previous spring configuration was something like:
<objects xmlns="http://www.springframework.net">
<object id='inputQueue' type='Spring.Messaging.Support.MessageQueueFactoryObject, Spring.Messaging'>
<property name='Path' value='.\Private$\inputQueue'/>
</object>
<!-- MSMQ Transaction Manager -->
<object id="messageQueueTransactionManager" type="Spring.Messaging.Core.MessageQueueTransactionManager, Spring.Messaging"/>
<!-- Message Listener Container that uses MSMQ transactional for receives -->
<object id="transactionalMessageListenerContainer" type="Spring.Messaging.Listener.TransactionalMessageListenerContainer, Spring.Messaging">
<property name="MessageQueueObjectName" value="inputQueue"/>
<property name="PlatformTransactionManager" ref="messageQueueTransactionManager"/>
<property name="MaxConcurrentListeners" value="10"/>
<property name="MessageListener" ref="messageListenerAdapter"/>
</object>
<!-- Adapter to call a PONO as a messaging callback -->
<object id="messageListenerAdapter" type="Spring.Messaging.Listener.MessageListenerAdapter, Spring.Messaging">
<property name="HandlerObject" ref="messageListener"/>
</object>
<!-- The PONO class that you write -->
<object id="messageListener" type="Com.MyCompany.Listener, Com.MyCompany">
<property name="Container" ref="transactionalMessageListenerContainer"/>
</object>
</objects>
Should I just programmatically do what this file is doing each time I need a new listener? Should I not use Spring.NET's messaging framework for this?
A: If I'm using Spring.NET judiciously (i.e. messaging, ADO.NET helpers, etc.) as you already stated, I would do it programmatically. And then add the new graph to my existing container. This way Spring.NET still manages lifecycle for me.
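A very rough sketch of what "doing it programmatically" might look like. The type and property names are lifted from the XML above; the lifecycle calls at the end are assumptions based on Spring.NET's usual IInitializingObject/ILifecycle contracts, so verify them against your version, and note that MessageQueueObjectName still has to resolve against an application context that knows about the queue definition:
using Spring.Messaging.Core;
using Spring.Messaging.Listener;

public class ListenerFactory
{
    public TransactionalMessageListenerContainer CreateListener(object handler)
    {
        var container = new TransactionalMessageListenerContainer
        {
            MessageQueueObjectName = "inputQueue",   // must match a registered queue object
            PlatformTransactionManager = new MessageQueueTransactionManager(),
            MaxConcurrentListeners = 10,
            MessageListener = new MessageListenerAdapter { HandlerObject = handler }
        };
        container.AfterPropertiesSet();   // assumed: same initialization Spring would perform
        container.Start();                // assumed: ILifecycle start
        return container;
    }
}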
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What is the best way to generate and print invoices in a .NET application? I'm working on a billing system for a utility company, and we have the following requirements:
*
*Batch-generate and print approximately 1,500 bills per day that we then mail to customers
*Save the bill in a format that can emailed to the customer and also archived (probably PDF)
*Built with .NET with MS SQL Server database back-end
I'd like some advice about the best way to accomplish this. I'm thinking about building a WPF application that would have the bill template that we bind the bill data to and print from. But I haven't used WPF before so I'm not sure if that's the best technology to use, and I can't get my head around how the batching and PDF conversion would work.
Any thoughts? Would WPF work, or is there a better solution?
A: If you are using a SQL Server backend, Reporting Services should work for you.
Otherwise I would recommend a third-party report generator that fits your reporting needs and create an app that uses it to create & export the reports.
A: Even with SQL Server you may want to look at the client side reports functionality. It really fits better IMO with what you want. You can still query and get all the data you need form the server, but it allows you to have complete control over the automation process. Maybe you want to run it as a service, every day the report is generated by the service, converted to PDF and copied to disk and auto emailed. The client side can do all that and easily. And there is no reliance on reporting services or IIS, or having to have any of that configured.
A: I strongly recommend working with a reporting tool that provides native support for exporting to PDF, it is much easier for management purposes if you can start with a single format and report to handle both the printing and archiving of information.
If you are truly doing batch processing, I wouldn't see WPF as a needed component as a batch job you don't really have much of a UI, if any at all, depending on how you truly implement this.
If I were you I would focus on creating a batch processor that could be either running as a windows service, or scheduled to run at specific intervals to accomplish its job.
A: You can get good printing features out of WPF, since its new XPS paper document format is a replacement for PDF, and it has great programming support too. See this blog post from Charles Petzold regarding WPF printing: http://www.charlespetzold.com/blog/2006/02/201111.html
A: Maybe you should try with ActiveReports.NET or DevExpress XtraReports to generate the reports first by code. Both have PDF export support so you can generate PDF files and send them by mail.
A: Check out this book (http://www.apress.com/book/view/9781590598542); it gives many different scenarios, including emailing reports, a report generation service, etc. It's regarding client-side reporting, but it applies equally well to server side (design-wise anyway). Doing it client side (or on a dedicated server) may have its advantages, as you can fully control the automation process. But that's if you want to go with .NET reporting.
And yes you can use WPF.
A: You could also look at iTextSharp. It's a .NET PDF writing tool and is a port of the Java iText. The limited playing I did with it made PDF writing simple and fun.
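A hedged sketch using the classic iTextSharp API (version differences may apply; the file name and bill fields are invented):
using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;

class BillPdfDemo
{
    static void Main()
    {
        var doc = new Document(PageSize.A4);
        PdfWriter.GetInstance(doc, new FileStream("bill-12345.pdf", FileMode.Create));
        doc.Open();
        doc.Add(new Paragraph("Acme Utilities - Monthly Bill"));
        doc.Add(new Paragraph("Account: 12345"));
        doc.Add(new Paragraph("Amount due: $42.17"));
        doc.Close();
    }
}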
A: On one of the projects I work on we use list & label.
Basically you have a .NET API, you pass it a DataSet and then you make templates referencing the columns in the dataset, which can at least be printed (and I suppose exported to PDF too but didn't check...)
I didn't work with it myself tho so can't say much about quality.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How to improve performance writing to the database? We log values and we only log them once in a table. When we add values to the table we have to do a look up everytime to see if it needs to insert the value or just grab the id. We have an index on the table (not on the primary key) but there are about 350,000 rows (so it is taking 10 seconds to do 10 of these values).
So either
*
*We figure out a way to optimize it
*Strip it this feature out or
*Do something completely different when logging these values.
A: Just to be clear, the index is on the (presumably varchar or nvarchar) field in the table, correct? Not the PK?
ok, after your edit: You're doing an indexed lookup on a large (n)varchar text field. Even with the index that can be pretty slow -- you're still doing 2 big string comparisons. I can't really think of a great way to do this, but some initial SWAGs:
*
*compute a hash of the to-be-logged text, and store that in the database for subsequent lookups (see the sketch after this list)
*as another poster suggested, store all of the rows, and filter out dupes in the query (or with a nightly batch, whatever)
*don't check for duplicates. Catching an exception may still be cheaper than the lookup*
*hire someone with a really good memory who's fast with a mouse. When a message is going to be logged, flash it to their screen with an accept/deny prompt. If the entry is a dupe, have them click "deny"
* yeah, I know I'll be down-modded for that, but sometimes pragmatism just works.
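Here is the hash idea from the first bullet as a hedged T-SQL sketch (table and column names are invented; CHECKSUM can collide, so the full text is still compared after the hash narrows the search):
ALTER TABLE LogValue ADD ValueHash AS CHECKSUM(ValueText) PERSISTED;
CREATE INDEX IX_LogValue_ValueHash ON LogValue (ValueHash);

SELECT Id
FROM LogValue
WHERE ValueHash = CHECKSUM(@ValueText)
  AND ValueText = @ValueText;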
A: How frequently do you write to the table vs. reading from it. If you have frequent writes and occasional reads, consider just always doing inserts and then handle collapsing the values when doing a select.
If you're trying to put everything in one table, consider breaking them out into separate tables to cut down on size, or barring that use partitions on the table.
A: It's taking 1 second to do an indexed lookup on a 350k-row table? That sounds really rather unnecessarily slow to me.. Are you sure there isn't something else wrong?
A: Without seeing your actual queries I can only generalize. However, I'd offer the following ideas/advice:
1) Have you verified that your index is indeed being used for the lookup query? If it were an index with a high cardinality, it should be much faster.
2) You could combine the 2 operations into a single stored procedure which first looked for the row and then did an insert if necessary....something like:
IF EXISTS (SELECT ID FROM YourTable WHERE ID = @ID_to_look_for)
    SET @ID_exists = 1
ELSE
    SET @ID_exists = 0
If you post what the exact queries look like, maybe I can offer a more detailed answer.
A: Instead of doing a lookup just try inserting the value. If the table is designed to refuse duplicate records, i.e. it has a primary key or unique index, then the insert will error. Simply trap for the insert error and if it is received then grab the id as you normally would.
I agree that the lookup should not be taking that long but why make the engine parse the query, map out a path, do the lookup and then send you the results before you insert when it could do both at the same time.
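A hedged T-SQL sketch of that insert-and-trap approach (names are invented; 2601 and 2627 are the duplicate-key error numbers, and TRY/CATCH assumes SQL Server 2005 or later):
BEGIN TRY
    INSERT INTO LogValue (ValueText) VALUES (@ValueText);
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() NOT IN (2601, 2627)   -- anything other than a duplicate key is a real error
        RAISERROR('insert into LogValue failed', 16, 1);
END CATCH;

SELECT Id FROM LogValue WHERE ValueText = @ValueText;   -- existing or newly inserted row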
You could also look into:
*
*indexing better, assuming there is room for improvement
*Altering the physical layout of the database to improve IO
*Increasing the memory available to SQL Server
A: First of all, look at the query plan to see what it is doing. This will tell you if it is using the index. One second for a single row test/insert is too slow. For 350k rows this is long enough for it to do a table scan over a cached table.
Second. Look at the physical layout of your server. Do you have something like logs and data sharing the same disk?
Thirdly, check that the index columns on your unique key are in the same order as the predicate on the select query. Differences in order may confuse the query optimizer.
Fourthly, consider a clustered index on the unique key. If this is your main mode of looking up the row it will reduce the disk accesses as the table data is physically stored with the clustered indexes. See This for a blurb about clustered indexes. Set the table up with a generous fill factor.
Unless you have blob columns, 350k rows is way below the threshold where partitioning should make a difference. This size table should fit entirely in the cache.
A: I'm not sure I have enough informaiton to answer this, but here are some thoughts none the less:
*
*If you are not already doing so, you may be able to do the insert and the verification all in one SQL statement (INSERT INTO the table from a SELECT that LEFT OUTER JOINs to the table WHERE the id IS NULL)
*Are you using a DAL layer, or stored procedures to do this? Do you control the SQL used to select/insert? If you don't, you may want to use SQL Profiler to examine what is being sent to the DB in case its format invalidates the index.
A: "When we add values to the table we have to do a look up everytime to see if it needs to insert the value or just grab the id."
We used to call this the "upsert" operation.
try:
UPDATE log SET blah blah blah WHERE key = key;
except Missing Key:
INSERT INTO log(...) VALUES(...);
We never did our own query to see if the key existed, since that's the job of the UPDATE statement.
A: Are you by chance using a cursor? It shouldn't take ten seconds on a table that small to do what you said you were doing.
You need set-based update and insert statements.
A: *
*Rule out connectivity and driver issues - ensure other operations on the same database performed in the same manner are fast enough
*Make sure you measure this operation independently from other ops that might be running within the same transaction
*Make sure you have no lock scenarios - stop everything else and just execute your lookup and update sequence from your management tool.
*Check if the lookup is more costly (99%) or the disk write is costly - though 10 secs is way too high even for a slow disk. Do this for the completeness sake.
*Check if your index is being used by the query - table scans might be happening.
*If the columns used for Index is a text field, check if the text indexing is at the root of the issue by issuing the lookups on a non text column which has an index on it. If so try to change the logic to use the PK or use a hash instead of the text.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: When have we any practical use for hierarchical namespaces in c++? I can understand the use for one level of namespaces. But 3 levels of namespaces. Looks insane. Is there any practical use for that? Or is it just a misconception?
A: Hierarchical namespaces do have a use in that they allow progressively more refined definitions. Certainly a single provider may produce two classes with the same name. Often the first level is occupied by the company name, the second specifies the product, the third (and possibly more) my provide the domain.
There are also other uses of namespace segregation. One popular situation is placing the base classes for a factory pattern in its own namespace and then derived factories in their own namespaces by provider. E.g. System.Data, System.Data.SqlClient and System.Data.OleDbClient.
A: Obviously it's a matter of opinion. But it really boils down to organization. For example, I have a project which has a plugin api that has functions/objects which look something like this:
plugins::v1::function
When 2.0 is rolled out they will be put into the v2 sub-namespace. I plan to only deprecate but never remove v1 members which should nicely support backwards compatibility in the future. This is just one example of "sane" usage. I imagine some people will differ, but like I said, it's a matter of opinion.
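A minimal illustration of that versioned-plugin layout (all names are invented for the example):
#include <iostream>

namespace plugins {
namespace v1 {
    void register_plugin() { std::cout << "v1 API\n"; }   // kept for backwards compatibility
}
namespace v2 {
    void register_plugin() { std::cout << "v2 API\n"; }   // current interface
}
}

int main() {
    plugins::v1::register_plugin();  // old clients keep compiling
    plugins::v2::register_plugin();  // new clients opt in
    return 0;
}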
A: Big codebases will need it. Look at boost for an example. I don't think anyone would call the boost code 'insane'.
If you consider the fact that at any one level of a hierarchy, people can only comprehend somewhere very roughly on the order of 10 items, then two levels only gives you 100 maximum. A sufficiently big project is going to need more, so can easily end up 3 levels deep.
A: I work on XXX application in my company yyy, and I am writing a GUI subsystem. So I use yyy::xxx::gui as my namespace.
A: You can easily find yourself in a situation when you need more than one level. For example, your company has a giant namespace for all of its code to separate it from third party code, and you are writing a library which you want to put in its own namespace. Generally, whenever you have a very large and complex system, which is broken down hierarchically, it is reasonable to use several namespace levels.
A: It depends on your needs and programming style. But one of the benefits of namespace is to help partition name space (hence the name). With a single namespace, as your project is increases in size and complexity, so does the likelihood of name-collision.
If you're writing code that's meant to be shared or reused, this becomes even more important.
A: I agree for applications. Most people that use multiple levels of namespaces (in my experience) come from a Java or .NET background where the noise is significantly less. I find that good class prefixes can take the place of multiple levels of namespaces.
But I have seen good use of multiple namespace levels in boost (and other libraries). Everything is in the boost namespace, but libraries are allowed (encouraged?) to be in their own namespace. For example - boost::this_thread namespace. It allows things like...
boost::this_thread::get_id()
boost::this_thread::interruption_requested()
"this_thread" is just a namespace for a collection of free functions. You could do the same thing with a class and static functions (i.e. the Java way of defining a free function), but why do something unnatural when the language has a natural way of doing it?
A: Just look at the .Net base class library to see a namespace hierarchy put to good use. It goes four or five levels deep in a few places, but mostly it's just two or three, and the organization is very nice for finding things.
A: The bigger the codebase the bigger the need for hierarchical namespaces. As your project gets bigger and bigger you find you need to break it out in ways to make it easier to find stuff.
For instance we currently use a 2 level hierarchy. However some of the bigger portions we are now talking about breaking them out into 3 levels.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: What is the scope for imported classes in python? Please excuse the vague title. If anyone has a suggestion, please let me know! Also please retag with more appropriate tags!
The Problem
I want to have an instance of an imported class be able to view things in the scope (globals, locals) of the importer. Since I'm not sure of the exact mechanism at work here, I can describe it much better with snippets than words.
## File 1
def f1(): print "go f1!"
class C1(object):
def do_eval(self,x): # maybe this should be do_evil, given what happens
print "evaling"
eval(x)
eval(x,globals(),locals())
Then run this code from an interactive session; there will be lots of NameErrors.
## interactive
class C2(object):
def do_eval(self,x): # maybe this should be do_evil, given what happens
print "evaling"
eval(x)
eval(x,globals(),locals())
def f2():
print "go f2!"
from file1 import C1
import file1
C1().do_eval('file1.f1()')
C1().do_eval('f1()')
C1().do_eval('f2()')
file1.C1().do_eval('file1.f1()')
file1.C1().do_eval('f1()')
file1.C1().do_eval('f2()')
C2().do_eval('f2()')
C2().do_eval('file1.f1()')
C2().do_eval('f1()')
Is there a common idiom / pattern for this sort of task? Am I barking up the wrong tree entirely?
A: In this example, you can simply hand over functions as objects to the methods in C1:
>>> class C1(object):
>>> def eval(self, x):
>>> x()
>>>
>>> def f2(): print "go f2"
>>> c = C1()
>>> c.eval(f2)
go f2
In Python, you can pass functions and classes to other methods and invoke/create them there.
If you want to actually evaluate a code string, you have to specify the environment, as already mentioned by Thomas.
Your module from above, slightly changed:
## File 1
def f1(): print "go f1!"
class C1(object):
def do_eval(self, x, e_globals = globals(), e_locals = locals()):
eval(x, e_globals, e_locals)
Now, in the interactive interpreter:
>>> def f2():
>>> print "go f2!"
>>> from file1 import * # 1
>>> C1().do_eval("f2()") # 2
NameError: name 'f2' is not defined
>>> C1().do_eval("f2()", globals(), locals()) #3
go f2!
>>> C1().do_eval("f1()", globals(), locals()) #4
go f1!
Some annotations
*
*Here, we insert all objects from file1 into this module's namespace
*f2 is not in the namespace of file1, therefore we get a NameError
*Now we pass the environment explicitly, and the code can be evaluated
*f1 is in the namespace of this module, because we imported it
Edit: Added code sample on how to explicitly pass environment for eval.
A: Functions are always executed in the scope they are defined in, as are methods and class bodies. They are never executed in another scope. Because importing is just another assignment statement, and everything in Python is a reference, the functions, classes and modules don't even know where they are imported to.
You can do two things: explicitly pass the 'environment' you want them to use, or use stack hackery to access their caller's namespace. The former is vastly preferred over the latter, as it's not as implementation-dependent and fragile as the latter.
You may wish to look at the string.Template class, which tries to do something similar.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: MS SSRS Report Builder: Semantic query execution failed.? I have created an end user model and deployed it. Any report that I create and run gives me an error:
Report execution error
The report might not be valid or the server
could not process the data.
Semantic query execution failed. Invalid column name 'rowguid'.
Query execution failed for data set 'dataSet'.
An error has occurred during report processing.
Most of the tables contain a primary key named rowguid. I cannot remove these from the data source views, but I did go in and remove them from the model. This made no difference.
TIA
Daniel
Update
The data source was in a folder for all of the reporting data sources. As part of my testing/debugging I created a data source in the folder containing the model and the error went away. I intend to initiate an MS support incident about this and will post the update here.
A: Try creating a view that does not include that column. Once you have done that, recreate your data source views and model to be based on this view instead of the raw table, and retry creating the report in Report Builder.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What are best practices to implement security when using NHibernate? Traditionalists argue that stored procedures provide better security than using an Object Relational Mapping (ORM) framework such as NHibernate.
To counter that argument what are some approaches that can be used with NHibernate to ensure that proper security is in place (for example, preventing sql injection, etc.)?
(Please provide only one approach per answer)
A: Protect your connection strings.
As of .NET 2.0 and NHibernate 1.2, it is easy to use encrypted connection strings (and other application settings) in your config files. Store your connection string in the <connectionStrings> block, then use the NHibernate connection.connection_string_name property instead of connection.connection_string. If you're running a web site and not a Windows app, you can use the aspnet_regiis command line tool to encrypt the <connectionStrings> block, while leaving the rest of your NHibernate settings in plaintext for easy editing.
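As a rough illustration of that setup (the MyAppDb name and /MyApp path below are placeholders, not from the question):
<!-- Web.config: keep the credentials in the connectionStrings block -->
<connectionStrings>
  <add name="MyAppDb" connectionString="Server=db1;Database=MyApp;Integrated Security=SSPI;" />
</connectionStrings>
<!-- NHibernate configuration: reference the entry by name -->
<property name="connection.connection_string_name">MyAppDb</property>
To encrypt just that block for a web site, something like aspnet_regiis -pe "connectionStrings" -app "/MyApp" should do it; check the exact switches for your framework version.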
Another strategy is to use Integrated Authentication for your database connection, if your database platform supports it. That way, you're (hopefully) not storing credentials in plaintext in your config file.
A: Actually, NHibernate can be vulnerable to SQL injection if you use SQL or HQL to construct your queries. Make sure that you use parameterized queries if you need to do this, otherwise you're setting yourself up for a world of pain.
A: Use a dedicated, locked-down SQL account
A: One of the arguments I've heard in favor of sprocs over ORM is that they don't want people to do whatever they want in the database. They disallow select/insert/update/delete on the tables themselves. Every action is controlled through a procedure which is reviewed by a DBA. I can understand where this thinking comes from... especially when you have a bunch of amateurs all with their hands in your database.
But times have changed and NHibernate is different. It's incredibly mature. In most cases it will write better SQL than your DBA :).
You still have to protect yourself from doing something stupid. As spiderman says "with great power comes great responsibility"
I think it's much more appropriate to give NHibernate the proper access to the database and control actions through other means, such as audit logging and regular backups. If someone were to do something stupid, you can always recover.
A: http://weblogs.asp.net/fbouma/archive/2003/11/18/38178.aspx
A: Most ORMs handle SQL injection by creating parameterized queries. In NHibernate, if you are using LINQ to NHibernate or the Criteria/QueryOver ways of writing queries, the queries are automatically parameterized. If you are dynamically creating HQL/SQL queries yourself, you are more vulnerable and have to make sure those queries are parameterized.
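As a sketch of the difference (the User entity and the open session variable here are assumptions for the example, not taken from a real mapping):
// vulnerable: user input concatenated straight into the HQL string
var risky = session.CreateQuery("from User u where u.Name = '" + name + "'").List<User>();
// safer: let NHibernate bind the value as a parameter
var safe = session.CreateQuery("from User u where u.Name = :name")
                  .SetParameter("name", name)
                  .List<User>();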
A: OWASP mentions one form of SQL injection vulnerability in the context of ORM tools (and gives HQL injection as an example): http://www.owasp.org/index.php/Interpreter_Injection#ORM_Injection
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Avoid Deletions of Files using TortoiseSVN I am setting up Apache server with TortoiseSVN for a local source code repository.
I have observed that if someone deletes a file using TortoiseSVN it will get deleted from everyone's view, and the only record will be there in the log history. I feel it is dangerous that any user can delete a file if they have read/write rights (accidentally or intentionally; even though the file remains on the server it is risky and can create panic in others).
Is there a way that only admins can add or delete files?
A: Adding and deleting files are common operations in source management. You should not have to protect yourself against that.
Anyway, Subversion (and any VCS, actually) allows you to retrieve accidentally deleted files by doing copies or merging. This page from the Subversion documentation explains how to recover deleted files. You should look under the "Undoing changes" and "Resurrecting Deleted Items" sections.
A: You can set up a pre-commit hook on your repository to control which users can add or delete files.
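A rough, untested sketch of such a hook (the admin list and message are placeholders) could use svnlook to reject any transaction that deletes paths unless it comes from an admin:
#!/bin/sh
REPOS="$1"
TXN="$2"
AUTHOR=$(svnlook author -t "$TXN" "$REPOS")
# admins may add or delete anything
case "$AUTHOR" in
  admin1|admin2) exit 0 ;;
esac
# reject the commit if it deletes any path
if svnlook changed -t "$TXN" "$REPOS" | grep -q '^D '; then
  echo "Deleting files is restricted to repository admins." >&2
  exit 1
fi
exit 0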
A: One of the beauties of source control is that it doesn't matter if someone deletes a file. If they delete a file and check it in and it should not have been deleted, just revert their revision. Simple as that.
A: I would recommend that you read a book about version control, preferably the Version Control with Subversion. What you describe is not a problem, this is how version control works.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Resources that have to be manually cleaned up in C#? What resources have to be manually cleaned up in C# and what are the consequences of not doing so?
For example, say I have the following code:
myBrush = new System.Drawing.SolidBrush(System.Drawing.Color.Black);
// Use Brush
If I don't clean up the brush using the dispose method, I'm assuming the garbage collector frees the memory used at program termination? Is this correct?
What other resources do I need to manually clean up?
A: *
*Handles to internal windows data structures.
*Database connections.
*File handles.
*Network connections.
*COM/OLE references.
The list goes on.
It's important to call Dispose or even better yet, use the using pattern.
using (SolidBrush myBrush = new System.Drawing.SolidBrush(System.Drawing.Color.Black))
{
// use myBrush
}
If you don't dispose something, it'll be cleaned up when the garbage collector notices that there are no more references to it, which may be after some time.
In the case of System.Drawing.Brush, Windows will keep internal windows structures for the brush loaded in memory until all programs release their handle.
A: If you don't dispose something, it'll be cleaned up when the garbage collector notices that there are no more references to it in your code, which may be after some time. For something like that, it doesn't really matter, but for an open file it probably does.
In general, if something has a Dispose method, you should call it when you've finished with it, or, if you can, wrap it up in a using statement:
using (SolidBrush myBrush = new System.Drawing.SolidBrush(System.Drawing.Color.Black))
{
// use myBrush
}
A: The consequences of not disposing your IDisposables can vary from a negligible performance hit to crashing your app.
The Brush object in your example will be cleaned up by the GC when it feels like it. But your program won't have had the benefit of that bit of extra memory you would have gained by cleaning it up earlier. If you are using a lot of Brush objects this might become significant. The GC is also more efficient at cleaning up objects if they haven't been around very long, because it is a generational garbage collector.
On the other hand, the consequences of not disposing database connection objects could mean you run out of pooled database connections very quickly and cause your app to crash.
Either use
using (new DisposableThing...
{
...
}
Or, if you need to hold on to a reference to an IDisposable in your object for its lifetime, implement IDisposable on your object and call the IDisposable's Dispose method.
class MyClass : IDisposable
{
private IDisposable disposableThing;
public void DoStuffThatRequiresHavingAReferenceToDisposableThing() { ... }
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
    if (disposing && disposableThing != null)
    {
        disposableThing.Dispose();
    }
    //etc... (see IDisposable on msdn)
}
}
A: Generally, anything that implements IDisposable should cause you to pause and research the resource you're using.
GC only happens when there's memory pressure, so you cannot predict when. Although an unload of the AppDomain will certainly trigger it.
A: Technically anything that inherits from IDisposable should be proactively disposed. You can use the 'using' statement to make things easier.
http://msdn.microsoft.com/en-us/library/yh598w02.aspx
Sometimes you will see inconsistent use of IDisposable derived objects in documentation sample code as well as code that is generated by tools (i.e. visual studio).
What's nice about IDisposable is that it gives you the ability to proactively release the underlying unmanaged resource. Sometimes you really want to do this - think network connections and file resources for example.
A: As others have said, using is your friend.
I wrote this blog entry about how to implement IDisposable in a fairly straightforward way that is less error-prone by factoring out the parts that are the most important.
A: A trick I use when I can't remember whether a given object is a disposable resource is to type ".Dispose" (at most!) after the declaration to get Intellisense to check for me:
MemoryStream ms = new MemoryStream().Dispose
Then delete the .Dispose and use the using() directive:
using(MemoryStream ms = new MemoryStream())
{
...
}
A: Well, as long as you use the managed version of the resources and don't call the Windows APIs by yourself, you should be OK. Only worry about having to delete/destroy a resource when what you get is an IntPtr (which is how "Windows handles", and a whole lot of other things, are represented in .NET) rather than an object.
By the way, the resource (like any other .NET object) will be flagged for collection as soon as you leave the current context, so if you create the Brush inside a method, it will be flagged when you exit it.
A: If it's managed (i.e. part of the framework) you don't need to worry about it. If it implements IDisposable just wrap it in a using block.
If you want to use unmanaged resources then you need to read up on finalisers and implementing IDisposable yourself.
There's a lot more detail under this question
A: First, upon program termination you can assume that memory used by the process will be released along with the process itself.
When using Dispose or a destructor in .NET, one must understand that the time at which it is called by the GC is non-deterministic. That is why it is recommended to use the using statement or to call Dispose explicitly.
Resources such as files, synchronization objects such as semaphores, and other resources that live outside of the managed world of .NET must be freed.
The SolidBrush, for example, needs to be disposed because it is a GDI object living outside of the .NET world.
A: The garbage collector does not only free memory at program termination, otherwise it would not be very useful (on any decent/recent OS, when the process exits, all its memory is cleaned up automatically by the OS anyway).
One of the big advantages of C# compared to C/C++ is that you don't have to care about freeing allocated objects (most of the time, at least); the GC does it when the runtime decides (there are various strategies for when/how to do it).
Many resources are not taken care of by the GC: files, thread-related resources (locks), network connections, etc...
A: One place to be careful is objects that look small to the GC but are not... In the SharePoint API, for example, the SPWeb object has a small footprint as far as the GC is concerned and so will have low priority for collection, but it has really grabbed a bunch of memory (in the heap, I believe) that the GC doesn't know about. You will run into some fun memory issues if you are foreaching a whole bunch of these, for example; always remember to use using or Dispose!
A: Rather than thinking of an object as "holding" resources that need to be released, it's better to think of an object as having altered something (possibly outside the computer!) which will outlive it, in a way that could be harmful if it is not undone or "cleaned up", but which only the object can clean up. While this alteration commonly takes the form of some concrete object in a pool being marked "busy", its precise form doesn't matter. What matters is that the changes need to be undone, and the object holds the information necessary to do that.
A: The garbage collector will handle any managed resources. In your example, the brush will be cleaned up when the garbage collector decides to, which will happen some time after the last reference to the brush is no longer valid.
There are certain things that need to be manually cleaned up, but those are pointers retrieved from unmanaged sources, such as DLL calls; nothing within the .NET Framework needs this treatment, however.
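To illustrate that last point, memory you allocate yourself through the interop layer is invisible to the GC and has to be released by hand; a minimal sketch:
using System;
using System.Runtime.InteropServices;

class NativeBufferExample
{
    static void Main()
    {
        IntPtr buffer = Marshal.AllocHGlobal(1024); // unmanaged allocation, unknown to the GC
        try
        {
            // ... pass the buffer to some native API ...
        }
        finally
        {
            Marshal.FreeHGlobal(buffer);            // must be freed explicitly
        }
    }
}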
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Ruby and WS-Security I'm having troubles finding good Ruby libraries that implement WS-Security. I've seen wss4r but have yet to use it (and the documentation is a bit light on it). What libraries do you use for this task, or is there a better alternative?
A: I don't work with soap much myself, but this ruby extension is on my list of things to try: here. Might want to check it out.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Can I (re)map Ex commands in vim? I love vim and the speed it gives me. But sometimes, my fingers are too speedy and I find myself typing :WQ instead of :wq. (On a German keyboard, you have to press Shift to get the colon :.) Vim will then complain that WQ is Not an editor command.
Is there some way to make W and Q editor commands?
A: Try
:command WQ wq
:command Wq wq
:command W w
:command Q q
This way you can define your own commands. See :help command for more information.
A: Alternative way to do it:
Use 'command abbreviations'
:ca WQ wq
A: And you can use
:cmap WQ wq
as well.
E.g. I have
cmap h tab help
in my .vimrc which means opening help pages in a new tab.
Thanks for the tip Jim Stewart:
But here is a much better solution than the above (for the help mapping,
so that it only applies when you do :h):
cnoreabbrev <expr> h getcmdtype() == ":" && getcmdline() == "h" ? "tab h" : "h"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "74"
}
|
Q: Python and POST data In PHP I simply write:
$bob = $_POST['bob'];
How do I do the same in Python?
And yes, I do normally check that it exists etc, I'm just stripping it down specifically to the functionality I am after.
Edit: I am not using a framework
A: The simplest method is the 'cgi' module:
import cgi
data = cgi.FieldStorage()
data.getvalue('bob')  # or data['bob'].value, to get the string rather than the field object
But the context you are executing in (frameworks you're using, WSGI or even (heaven forbid) mod_python) may have different, more efficient or more direct methods of access.
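For example, in a bare WSGI application (no framework), the POST body arrives through the environ dict; a minimal sketch, assuming Python 2.6+ for urlparse.parse_qs:
from urlparse import parse_qs

def application(environ, start_response):
    length = int(environ.get('CONTENT_LENGTH') or 0)
    fields = parse_qs(environ['wsgi.input'].read(length))
    bob = fields.get('bob', [''])[0]   # same idea as $_POST['bob']
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [bob]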
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Design by contract using assertions or exceptions? When programming by contract a function or method first checks whether its preconditions are fulfilled, before starting to work on its responsibilities, right? The two most prominent ways to do these checks are by assert and by exception.
*
*assert fails only in debug mode. Because of this, it is crucial to (unit) test all separate contract preconditions to see whether they actually fail.
*exception fails in debug and release mode. This has the benefit that tested debug behavior is identical to release behavior, but it incurs a runtime performance penalty.
Which one do you think is preferable?
See releated question here
A: Asserts are for catching something a developer has done wrong (not just yourself - another developer on your team also). If it's reasonable that a user mistake could create this condition, then it should be an exception.
Likewise think about the consequences. An assert typically shuts down the app. If there is any realistic expectation that the condition could be recovered from, you should probably use an exception.
On the other hand, if the problem can only be due to a programmer error then use an assert, because you want to know about it as soon as possible. An exception might be caught and handled, and you would never find out about it. And yes, you should disable asserts in the release code because there you want the app to recover if there is the slightest chance it might. Even if the state of your program is profoundly broken the user just might be able to save their work.
A: It is not exactly true that "assert fails only in debug mode."
In Object Oriented Software Construction, 2nd Edition by Bertrand Meyer, the author leaves a door open for checking preconditions in release mode. In that case, what happens when an assertion fails is that... an assertion violation exception is raised! In this case, there is no recovery from the situation; something useful can still be done, though: automatically generate an error report and, in some cases, restart the application.
The motivation behind this is that preconditions are typically cheaper to test than invariants and postconditions, and that in some cases correctness and "safety" in the release build are more important than speed. i.e. For many applications speed is not an issue, but robustness (the ability of the program to behave in a safe way when its behaviour is not correct, i.e. when a contract is broken) is.
Should you always leave precondition checks enabled? It depends. It's up to you. There is no universal answer. If you're making software for a bank, it might be better to interrupt execution with an alarming message than to transfer $1,000,000 instead of $1,000. But what if you're programming a game? Maybe you need all the speed you can get, and if someone gets 1000 points instead of 10 because of a bug that the preconditions didn't catch (because they're not enabled), tough luck.
In both cases you should ideally have caught that bug during testing, and you should do a significant part of your testing with assertions enabled. What is being discussed here is what the best policy is for those rare cases in which preconditions fail in production code, in a scenario which was not detected earlier due to incomplete testing.
To summarize, you can have assertions and still get the exceptions automatically, if you leave them enabled - at least in Eiffel. I think to do the same in C++ you need to type it yourself.
See also: When should assertions stay in production code?
A: Disabling assert in release builds is like saying "I will never have any issues whatsoever in a release build", which is often not the case. So assert shouldn't be disabled in a release build. But you don't want the release build crashing whenever errors occur either, do you?
So use exceptions and use them well. Use a good, solid exception hierarchy and ensure that you catch them. You can put a hook on exception throwing in your debugger to catch exceptions as they are thrown, and in release mode you can compensate for the error rather than suffer a straight-up crash. It's the safer way to go.
A: I outlined my view on the state of the matter here: How do you validate an object's internal state? . Generally, assert your claims and throw for violation by others. For disabling asserts in release builds, you can do:
*
*Disable asserts for expensive checks (like checking whether a range is ordered)
*Keep trivial checks enabled (like checking for a null pointer or a boolean value)
Of course, in release builds, failed assertions and uncaught exceptions should be handled another way than in debug builds (where it could just call std::abort). Write a log of the error somewhere (possibly into a file), tell the customer that an internal error occurred. The customer will be able to send you the log-file.
A: The principle I follow is this: If a situation can be realistically avoided by coding then use an assertion. Otherwise use an exception.
Assertions are for ensuring that the Contract is being adhered to. The contract must be fair, so that client must be in a position to ensure it complies. For example, you can state in a contract that a URL must be valid because the rules about what is and isn't a valid URL are known and consistent.
Exceptions are for situations that are outside the control of both the client and the server. An exception means that something has gone wrong, and there's nothing that could have been done to avoid it. For example, network connectivity is outside the applications control so there is nothing that can be done to avoid a network error.
I'd like to add that the Assertion / Exception distinction isn't really the best way to think about it. What you really want to be thinking about is the contract and how it can be enforced. In my URL example above, the best thing to do is have a class that encapsulates a URL and is either Null or a valid URL. It is the conversion of a string into a URL that enforces the contract, and an exception is thrown if it is invalid. A method with a URL parameter is much clearer than a method with a String parameter and an assertion that specifies a URL.
A: There was a huge thread regarding the enabling/disabling of assertions in release builds on comp.lang.c++.moderated, which if you have a few weeks you can see how varied the opinions on this are. :)
Contrary to coppro, I believe that if you are not sure that an assertion can be disabled in a release build, then it should not have been an assert. Assertions are to protect against program invariants being broken. In such a case, as far as the client of your code is concerned there will be one of two possible outcomes:
*
*Die with some kind of OS type failure, resulting in a call to abort. (Without assert)
*Die via a direct call to abort. (With assert)
There is no difference to the user, however, it's possible that the assertions add an unnecessary performance cost in the code that is present in the vast majority of runs where the code doesn't fail.
The answer to the question actually depends much more on who the clients of the API will be. If you are writing a library providing an API, then you need some form of mechanism to notify your customers that they have used the API incorrectly. Unless you supply two versions of the library (one with asserts, one without) then assert is very unlikely the appropriate choice.
Personally, however, I'm not sure that I would go with exceptions for this case either. Exceptions are better suited to where a suitable form of recovery can take place. For example, it may be that you're trying to allocate memory. When you catch a 'std::bad_alloc' exception it might be possible to free up memory and try again.
A: The rule of thumb is that you should use assertions when you are trying to catch your own errors, and exceptions when trying to catch other people's errors. In other words, you should use exceptions to check the preconditions for the public API functions, and whenever you get any data that are external to your system. You should use asserts for the functions or data that are internal to your system.
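In C#, for instance, that rule of thumb might look something like this (the class and member names are made up for the example):
using System;
using System.Diagnostics;

public class AccountService
{
    // public API boundary: validate other people's input with exceptions
    public void Withdraw(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount");
        ApplyWithdrawal(amount);
    }

    // internal helper: guard our own assumptions with asserts
    private void ApplyWithdrawal(decimal amount)
    {
        Debug.Assert(amount > 0, "caller should have validated the amount");
        // ... update the balance ...
    }
}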
A: you're asking about the difference between design-time and run-time errors.
asserts are 'hey programmer, this is broken' notifications, they're there to remind you of bugs you wouldn't have noticed when they happened.
exceptions are 'hey user, somethings gone wrong' notifications (obviously you can code to catch them so the user never gets told) but these are designed to occur at run time when Joe user is using the app.
So, if you think you can get all your bugs out, use exceptions only. If you think you can't..... use exceptions. You can still use debug asserts to make the number of exceptions less of course.
Don't forget that many of the preconditions will be user-supplied data, so you will need a good way of informing the user his data was no good. To do that, you'll often need to return error data down the call stack to the bits he is interacting with. Asserts will not be useful then - doubly so if your app is n-tier.
Lastly, I'd use neither - error codes are far superior for errors you think will occur regularly. :)
A: I prefer the second one. While your tests may have run fine, Murphy says that something unexpected will go wrong. So, instead of getting an exception at the actual erroneous method call, you end up tracing out a NullPointerException (or equivalent) 10 stack frames deeper.
A: The previous answers are correct: use exceptions for public API functions. The only time you might wish to bend this rule is when the check is computationally expensive. In that case, you can put it in an assert.
If you think violation of that precondition is likely, keep it as an exception, or refactor the precondition away.
A: You should use both. Asserts are for your convenience as a developer. Exceptions catch things you missed or didn't expect during runtime.
I've grown fond of glib's error reporting functions instead of plain old asserts. They behave like assert statements but instead of halting the program, they just return a value and let the program continue. It works surprisingly well, and as a bonus you get to see what happens to the rest of your program when a function doesn't return "what it's supposed to". If it crashes, you know that your error checking is lax somewhere else down the road.
In my last project, I used these style of functions to implement precondition checking, and if one of them failed, I would print a stack trace to the log file but keep on running. Saved me tons of debugging time when other people would encounter a problem when running my debug build.
#ifdef DEBUG
#define RETURN_IF_FAIL(expr) do { \
if (!(expr)) \
{ \
fprintf(stderr, \
"file %s: line %d (%s): precondition `%s' failed.", \
__FILE__, \
__LINE__, \
__PRETTY_FUNCTION__, \
#expr); \
::print_stack_trace(2); \
return; \
}; } while(0)
#define RETURN_VAL_IF_FAIL(expr, val) do { \
if (!(expr)) \
{ \
fprintf(stderr, \
"file %s: line %d (%s): precondition `%s' failed.", \
__FILE__, \
__LINE__, \
__PRETTY_FUNCTION__, \
#expr); \
::print_stack_trace(2); \
return val; \
}; } while(0)
#else
#define RETURN_IF_FAIL(expr)
#define RETURN_VAL_IF_FAIL(expr, val)
#endif
If I needed runtime checking of arguments, I'd do this:
char *doSomething(char *ptr)
{
RETURN_VAL_IF_FAIL(ptr != NULL, NULL); // same as assert(ptr != NULL), but returns NULL if it fails.
// Goes away when debug off.
if( ptr != NULL )
{
...
}
return ptr;
}
A: I tried synthesising several of the other answers here with my own views.
Use assertions for cases where you want to disable it in production, erring toward leaving them in. The only real reason to disable in production, but not in development, is to speed up the program. In most cases, this speed up won't be significant, but sometimes code is time critical or the test is computationally expensive. If code is mission critical, then exceptions may be best despite the slow down.
If there is any real chance of recovery, use an exception, as assertions aren't designed to be recovered from. For example, code is rarely designed to recover from programming errors, but it is designed to recover from factors such as network failures or locked files. Errors should not be handled as exceptions simply for being outside the control of the programmer. Rather, the predictability of these errors, compared to coding mistakes, makes them more amenable to recovery.
Re argument that it is easier to debug assertions: The stack trace from a properly named exception is as easy to read as an assertion. Good code should only catch specific types of exceptions, so exceptions should not go unnoticed due to being caught. However, I think Java sometimes forces you to catch all exceptions.
A: The rule of thumb, to me, is to use assert expressions to find internal errors and exceptions for external errors. You can benefit a great deal from the following discussion by Greg, from here.
Assert expressions are used to find programming errors: either errors in the program's logic itself or in errors in its corresponding implementation. An assert condition verifies that the program remains in a defined state. A "defined state" is basically one that agrees with the program's assumptions. Note that a "defined state" for a program need not be an "ideal state" or even "a usual state", or even a "useful state" but more on that important point later.
To understand how assertions fit into a program, consider a routine in
a C++ program that is about to dereference a pointer. Now should the
routine test whether the pointer is NULL before the dereferencing, or
should it assert that the pointer is not NULL and then go ahead and
dereference it regardless?
I imagine that most developers would want to do both, add the assert,
but also check the pointer for a NULL value, in order not to crash
should the asserted condition fail. On the surface, performing both the
test and the check may seem the wisest decision.
Unlike its asserted conditions, a program's error handling (exceptions) refers not
to errors in the program, but to inputs the program obtains from its
environment. These are often "errors" on someone's part, such as a user
attempting to login to an account without typing in a password. And
even though the error may prevent a successful completion of program's
task, there is no program failure. The program fails to login the user
without a password due to an external error - an error on the user's
part. If the circumstances were different, and the user typed in the
correct password and the program failed to recognize it, then although
the outcome would still be the same, the failure would now belong to
the program.
The purpose of error handling (exceptions) is two fold. The first is to communicate
to the user (or some other client) that an error in program's input has
been detected and what it means. The second aim is to restore the
application after the error is detected, to a well-defined state. Note
that the program itself is not in error in this situation. Granted, the
program may be in a non-ideal state, or even a state in which it can do
nothing useful, but there is no programming error. On the contrary,
since the error recovery state is one anticipated by the program's
design, it is one that the program can handle.
PS: you may want to check out the similar question: Exception Vs Assertion.
A: See also this question:
In some cases, asserts are disabled when building for release. You may
not have control over this (otherwise, you could build with asserts
on), so it might be a good idea to do it like this.
The problem with "correcting" the input values is that the caller will
not get what they expect, and this can lead to problems or even
crashes in wholly different parts of the program, making debugging a
nightmare.
I usually throw an exception in the if-statement to take over the role
of the assert in case they are disabled
assert(value>0);
if(value<=0) throw new ArgumentOutOfRangeException("value");
//do stuff
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "130"
}
|
Q: Try-catch every line of code without individual try-catch blocks I do not currently have this issue, but you never know, and thought experiments are always fun.
Ignoring the obvious problems that you would have to have with your architecture to even be attempting this, let's assume that you had some horribly-written code of someone else's design, and you needed to do a bunch of wide and varied operations in the same code block, e.g.:
WidgetMaker.SetAlignment(57);
contactForm["Title"] = txtTitle.Text;
Casserole.Season(true, false);
((RecordKeeper)Session["CasseroleTracker"]).Seasoned = true;
Multiplied by a hundred. Some of these might work, others might go badly wrong. What you need is the C# equivalent of "on error resume next", otherwise you're going to end up copying and pasting try-catches around the many lines of code.
How would you attempt to tackle this problem?
A: Fail Fast
To elaborate, I guess I am questioning the question. If an exception is thrown, why would you want your code to simply continue as if nothing has happened? Either you expect exceptions in certain situations, in which case you write a try-catch block around that code and handle them, or there is an unexpected error, in which case you should prefer your application to abort, or retry, or fail. Not carry on like a wounded zombie moaning 'brains'.
A: This is one of the things that having a preprocessor is useful for. You could define a macro that swallows exceptions, then with a quick script add that macro to all lines.
So, if this were C++, you could do something like this:
#define ATTEMPT(x) try { x; } catch (...) { }
// ...
ATTEMPT(WidgetMaker.SetAlignment(57));
ATTEMPT(contactForm["Title"] = txtTitle.Text);
ATTEMPT(Casserole.Season(true, false));
ATTEMPT(((RecordKeeper)Session["CasseroleTracker"]).Seasoned = true);
Unfortunately, not many languages seem to include a preprocessor like C/C++ did.
You could create your own preprocessor and add it as a pre-build step. If you felt like completely automating it you could probably write a preprocessor that would take the actual code file and add the try/catch stuff in on its own (so you don't have to add those ATTEMPT() blocks to the code manually). Making sure it only modified the lines it's supposed to could be difficult though (you would have to skip variable declarations, loop constructs, etc. so that you don't break the build).
However, I think these are horrible ideas and should never be done, but the question was asked. :)
Really, you shouldn't ever do this. You need to find what's causing the error and fix it. Swallowing/ignoring errors is a bad thing to do, so I think the correct answer here is "Fix the bug, don't ignore it!". :)
A: On Error Resume Next is a really bad idea in the C# world. Nor would adding the equivalent to On Error Resume Next actually help you. All it would do is leave you in a bad state which could cause more subtle errors, data loss and possibly data corruption.
But to give the questioner his due, you could add a global handler and check the TargetSite to see which method borked. Then you could at least know what line it borked on. The next part would be to try and figure out how to set the "next statement" the same way the debugger does it. Hopefully your stack won't have unwound at this point or you can re-create it, but it's certainly worth a shot. However, given this approach the code would have to run in Debug mode every time so that you would have your debug symbols included.
A: public delegate void VoidDelegate();
public static class Utils
{
public static void Try(VoidDelegate v) {
try {
v();
}
catch {}
}
}
Utils.Try( () => WidgetMaker.SetAlignment(57) );
Utils.Try( () => contactForm["Title"] = txtTitle.Text );
Utils.Try( () => Casserole.Season(true, false) );
Utils.Try( () => ((RecordKeeper)Session["CasseroleTracker"]).Seasoned = true );
A: As someone mentioned, VB allows this. How about doing it the same way in C#? Enter trusty reflector:
This:
Sub Main()
On Error Resume Next
Dim i As Integer = 0
Dim y As Integer = CInt(5 / i)
End Sub
Translates into this:
public static void Main()
{
// This item is obfuscated and can not be translated.
int VB$ResumeTarget;
try
{
int VB$CurrentStatement;
Label_0001:
ProjectData.ClearProjectError();
int VB$ActiveHandler = -2;
Label_0009:
VB$CurrentStatement = 2;
int i = 0;
Label_000E:
VB$CurrentStatement = 3;
int y = (int) Math.Round((double) (5.0 / ((double) i)));
goto Label_008F;
Label_0029:
VB$ResumeTarget = 0;
switch ((VB$ResumeTarget + 1))
{
case 1:
goto Label_0001;
case 2:
goto Label_0009;
case 3:
goto Label_000E;
case 4:
goto Label_008F;
default:
goto Label_0084;
}
Label_0049:
VB$ResumeTarget = VB$CurrentStatement;
switch (((VB$ActiveHandler > -2) ? VB$ActiveHandler : 1))
{
case 0:
goto Label_0084;
case 1:
goto Label_0029;
}
}
catch (object obj1) when (?)
{
ProjectData.SetProjectError((Exception) obj1);
goto Label_0049;
}
Label_0084:
throw ProjectData.CreateProjectError(-2146828237);
Label_008F:
if (VB$ResumeTarget != 0)
{
ProjectData.ClearProjectError();
}
}
A: Rewrite the code. Try to find sets of statements which logically depend on each other, so that if one fails then the next ones make no sense, and hive them off into their own functions and put try-catches round them, if you want to ignore the result of that and continue.
A: Refactor into individual, well-named methods:
AdjustFormWidgets();
SetContactTitle(txtTitle.Text);
SeasonCasserole();
Each of those is protected appropriately.
A: This may help you in identifying the pieces that have the most problems.
@ JB King
Thanks for reminding me. The Logging Application Block has an Instrumentation Event that can be used to trace events; you can find more info in the MS Enterprise Library docs.
Using (New InstEvent)
<series of statements>
End Using
All of the steps in this using will be traced to a log file, and you can parse that out to see where the log breaks (ex is thrown) and id the high offenders.
Refactoring is really your best bet, but if you have a lot, this may help you pinpoint the worst offenders.
A: I would say do nothing.
Yup thats right, do NOTHING.
You have clearly identified two things to me:
*
*You know the architecture is borked.
*There is a ton of this crap.
I say:
*
*Do nothing.
*Add a global error handler to send you an email every time it goes boom.
*Wait until something falls over (or fails a test)
*Correct that (Refactoring as necessary within the scope of the page).
*Repeat every time a problem occurs.
You will have this cleared up in no time if it is that bad. Yeah I know it sounds sucky and you may be pulling your hair out with bugfixes to begin with, but it will allow you to fix the needy/buggy code before the (large) amount of code that may actually be working no matter how crappy it looks.
Once you start winning the war, you will have a better handle on the code (due to all your refactoring) and a better idea of a winning design for it.
Trying to wrap all of it in bubble wrap is probably going to take just as long to do, and you will still not be any closer to fixing the problems.
A: It's pretty obvious that you'd write the code in VB.NET, which actually does have On Error Resume Next, and export it in a DLL to C#. Anything else is just being a glutton
for punishment.
A: If you can get the compiler to give you an expression tree for this code, then you could modify that expression tree by replacing each statement with a new try-catch block that wraps the original statement. This isn't as far-fetched as it sounds; for LINQ, C# acquired the ability to capture lambda expressions as expression trees that can be manipulated in user code at runtime.
This approach is not possible today with .NET 3.5 -- if for no other reason than the lack of a "try" statement in System.Linq.Expressions. However, it may very well be viable in a future version of C# once the merge of the DLR and LINQ expression trees is complete.
A: You could use goto, but it's still messy.
I've actually wanted a sort of single statement try-catch for a while. It would be helpful in certain cases, like adding logging code or something that you don't want to interrupt the main program flow if it fails.
I suspect something could be done with some of the features associated with linq, but don't really have time to look into it at the moment. If you could just find a way to wrap a statement as an anonymous function, then use another one to call that within a try-catch block it would work... but not sure if that's possible just yet.
A: Why not use reflection in C#? You could create a class that reflects on the code and uses line numbers as the hint for what to put in each individual try/catch block. This has a few advantages:
*
*It's slightly less ugly, as it doesn't really require you to mangle your source code, and you can use it only in debug mode.
*You learn something interesting about c# while implementing it.
I would, however, recommend against any of this, unless of course you are taking over maintenance of someone else's work and you need to get a handle on the exceptions so you can fix them. Might be fun to write, though.
A: Fun question; very terrible.
It'd be nice if you could use a macro. But this is blasted C#, so you might solve it with some preprocessor work or some external tool to wrap your lines in individual try-catch blocks. Not sure if you meant you didn't want to manually wrap them or that you wanted to avoid try-catch entirely.
Messing around with this, I tried labeling every line and jumping back from a single catch, without much luck. However, Christopher uncovered the correct way to do this. There's some interesting additional discussion of this at Dot Net Thoughts and at Mike Stall's .NET Blog.
EDIT: Of course. The try-catch / switch-goto solution listed won't actually compile since the try labels are out-of-scope in catch. Anyone know what's missing to make something like this compile?
You could automate this with a compiler preprocess step or maybe hack up Mike Stall's Inline IL tool to inject some error-ignorance.
(Orion Adrian's answer about examining the Exception and trying to set the next instruction is interesting too.)
All in all, it seems like an interesting and instructive exercise. Of course, you'd have to decide at what point the effort to simulate ON ERROR RESUME NEXT outweighs the effort to fix the code. :-)
A: Catch the errors in the UnhandledException event of the application. That way, unhandled exceptions can even be logged, along with the sender and whatever other information the developer would reasonably want.
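A minimal sketch of wiring that up for logging (the log path is only an example, and note this records the failure rather than keeping the process alive):
AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
{
    var ex = e.ExceptionObject as Exception;
    System.IO.File.AppendAllText(@"C:\logs\app-errors.log",
        DateTime.Now + " " + ex + Environment.NewLine);
};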
A: Unfortunately you are probably out of luck. On Error Resume Next is a legacy option that is generally heavily discouraged, and does not have an equivalent to my knowledge in C#.
I would recommend leaving the code in VB (it sounds like that was the source, given your specific request for On Error Resume Next) and interfacing with or from a C# dll or exe that implements whatever new code you need. Then perform refactoring to make the code safe, and convert this safe code to C# as you do so.
A: You could look at integrating the Enterprise Library's Exception Handling component for one idea of how to handle unhandled exceptions.
If this is for ASP.Net applications, there is a function in the Global.asax called, "Application_Error" that gets called in most cases with catastrophic failure being the other case usually.
A: Highlight each line, one at a time, and use 'Surround with' try/catch. That avoids the copying and pasting you mentioned.
A: Ignoring all the reasons you'd want to avoid doing this.......
If it were simply a need to keep # of lines down, you could try something like:
int totalMethodCount = xxx;
for(int counter = 0; counter < totalMethodCount; counter++) {
try {
if (counter == 0) WidgetMaker.SetAlignment(57);
if (counter == 1) contactForm["Title"] = txtTitle.Text;
if (counter == 2) Casserole.Season(true, false);
if (counter == 3) ((RecordKeeper)Session["CasseroleTracker"]).Seasoned = true;
} catch (Exception ex) {
// log here
}
}
However, you'd have to keep an eye on variable scope if you try to reuse any of the results of the calls.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Seam/JSF form submit firing button onclick event I have a search form with a query builder. The builder is activated by a button. Something like this
<h:form id="search_form">
<h:outputLabel for="expression" value="Expression"/>
<h:inputText id="expression" required="true" value="#{searcher.expression}"/>
<button onclick="openBuilder(); return false;">Open Builder</button>
<h:commandButton value="Search" action="#{searcher.search}"/>
</h:form>
The result is HTML that has both a <button/> and an <input type="submit"/> in the form. If the user enters a string into the expression field and hits the enter key rather than clicking the submit button, the query builder is displayed when the expected behavior is that the search be submitted. What gives?
A: A button in an HTML form is assumed to be used to submit the form. Change button to input type="button" and that should fix it.
Alternatively, add type="button" to the button element.
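In other words, something like:
<button type="button" onclick="openBuilder(); return false;">Open Builder</button>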
A: First, give an ID to the Search button.
Then, on the textbox, you can intercept the onkeydown client event with a (JavaScript) function like this:
function KeyDownHandler(event)
{
// process only the Enter key
if (event.keyCode == 13)
{
// cancel the default submit
event.returnValue=false;
event.cancel = true;
// submit the form by programmatically clicking the specified button
document.getElementById('searchButtonId').click();
}
}
I hope this helps.
A: If there is a single input field within the form, many browsers submit the form automatically when the Enter key is hit.
Try
*
*Add another input field. Hide it by styling it so it isn't visible (e.g., <input type="text" name="bogusField" style="display: none;" />).
*Block the enter key form submit behavior within a JavaScript event handler (e.g., here or here). Even better, use a GUI toolkit that may help with this (e.g., GWT)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How to update a Tix.ComboBox's text? I have a Tix.ComboBox with an editable text field. How do I force the variable holding the value for the text to update?
Let me give a more concrete explanation. I have a combo box and a button. When I click the button, it pops up a message box with the value of the combo box. Let's say the combo box text field currently has the value "thing1". If I type "new" into the box and then click on the button with my mouse, it pops up the message "thing1". If I type "new" in the box, tab focus away from the combo box, and then click the button, the pop-up message says "new".
How do I force the combo box to update its value to "new" without requiring that I tab away from the combo box?
I have included sample code.
import Tix
import tkMessageBox
class App(object):
def __init__(self, window):
window.winfo_toplevel().wm_title("test")
self.window = window
self.combo = Tix.ComboBox(window)
self.combo.insert(Tix.END, 'thing1')
self.combo.insert(Tix.END, 'thing2')
self.combo.entry['state'] = "normal"
self.combo['editable'] = True
self.combo.pack()
button = Tix.Button(window)
button['text'] = "Go"
button['command'] = self.go
button.pack()
def go(self):
tkMessageBox.showinfo('info', self.combo['value'])
if __name__ == '__main__':
root = Tix.Tk()
App(root)
root.mainloop()
A: woo!
solved it on my own.
Use
self.combo['selection']
instead of
self.combo['value']
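i.e. in the example above, the button handler becomes:
def go(self):
    # 'selection' reflects the text currently in the entry, even before focus leaves it
    tkMessageBox.showinfo('info', self.combo['selection'])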
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to check if a php script is still running I have a PHP script that listens on a queue. Theoretically, it's never supposed to die. Is there something to check if it's still running? Something like Ruby's God ( http://god.rubyforge.org/ ) for PHP?
God is language agnostic but it would be nice to have a solution that works on windows as well.
A: Just append a second command after the script. When/if it stops, the second command is invoked. Eg.:
php daemon.php 2>&1 | mail -s "Daemon stopped" you@example.org
Edit:
Technically, this invokes the mailer right away, but only completes the command when the php script ends. Doing this captures the output of the php-script and includes in the mail body, which can be useful for debugging what caused the script to halt.
A: Simple bash script
#!/bin/bash
while true; do
    if ! pidof -x script.php > /dev/null;
    then
        php script.php &
    fi
    sleep 10   # don't spin in a busy loop between checks
done
A: Not for windows, but...
I've got a couple of long-running PHP scripts that have a shell script wrapping them. You can optionally return a value from the script that will be checked in the shell script, to exit, restart immediately, or sleep for a few seconds and then restart.
Here's a simple one that just keeps running the PHP script till it's manually stopped.
#!/bin/bash
clear
date
php -f cli-SCRIPT.php
echo "wait a little while ..."; sleep 10
exec $0
The "exec $0" restarts the script, without creating a sub-process that will have to unravel later (and take up resources in the meantime). This bash script wraps a mail-sender, so it's not a problem if it exits and pauses for a moment.
A: Here is what I did to combat a similar issue. This helps in the event anyone else has a parameterized php script that you want cron to execute frequently, but only want one execution to run at any time. Add this to the top of your php script, or create a common method.
$runningScripts = shell_exec('ps -ef |grep '.strtolower($parameter).' |grep '.dirname(__FILE__).' |grep '.basename(__FILE__).' |grep -v grep |wc -l');
if($runningScripts > 1){
die();
}
A: I had the same issue - wanting to check if a script is running. So I came up with this and I run it as a cron job. It grabs the running processes as an array and cycles through each line, checking for the file name. Seems to work fine. Replace #user# with your script user.
exec("ps -U #user# -u #user# u", $output, $result);
foreach ($output AS $line) if(strpos($line, "test.php")) echo "found";
A: In linux run ps as follows:
ps -C php -f
You could then do in a php script:
$output = shell_exec('ps -C php -f');
if (strpos($output, "php my_script.php")===false) {
shell_exec('php my_script.php > /dev/null 2>&1 &');
}
The above code lists all php processes running in full, then checks to see if "my_script.php" is in the list of running processes, if not it runs the process and does not wait for the process to terminate to carry on doing what it was doing.
A: You can write in your crontab something like this:
0 3 * * * /usr/bin/php -f /home/test/test.php my_special_cron
Your test.php file should look like this:
<?php
php_sapi_name() == 'cli' || exit;
if($argv[1]) {
substr_count(shell_exec('ps -ax'), $argv[1]) < 3 || exit;
}
// your code here
That way you will have only one active instance of the cron job with my_special_cron as the process key. So you can add more jobs within the same php file.
test.php system_send_emails sendEmails
test.php system_create_orders orderExport
A: Inspired by Justin Levene's answer, and improved because ps -C doesn't work on Mac, which I needed in my case. So you can use this in a php script (maybe just before you need the daemon alive), tested on both Mac OS X 10.11.4 and Ubuntu 14.04:
$daemonPath = "FULL_PATH_TO_DAEMON";
$runningPhpProcessesOfDaemon = (int) shell_exec("ps aux | grep -c '[p]hp ".$daemonPath."'");
if ($runningPhpProcessesOfDaemon === 0) {
shell_exec('php ' . $daemonPath . ' > /dev/null 2>&1 &');
}
Small but useful detail: Why grep -c '[p]hp ...' instead of grep -c 'php ...'?
Because when counting processes, the grep -c 'php ...' command itself would be counted as a process that fits the pattern. Using a bracket expression for the first letter of php makes our command line different from the pattern we search for.
A: One possible solution is to have it listen on a port using the socket functions. You can check that the socket is still listening with a simple script. Even a monitoring service like pingdom could monitor its status. If it dies, the socket is no longer listening.
Plenty of solutions.. Good luck.
A: If you have your hands on the script, you can just have it write a timestamp to the database every X iterations, and then let a cron job check whether that value is up to date.
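A file-based variant of that heartbeat idea, as a rough sketch (paths and the timeout are arbitrary):
// inside the daemon's main loop: record that we are still alive
file_put_contents('/tmp/queue-daemon.heartbeat', time());

// watchdog.php, run from cron every few minutes
$last = (int) @file_get_contents('/tmp/queue-daemon.heartbeat');
if (time() - $last > 300) {
    // heartbeat is stale: assume the daemon died and restart it
    shell_exec('php /path/to/daemon.php > /dev/null 2>&1 &');
}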
A: troelskn wrote:
Just append a second command after the script. When/if it stops, the second command is invoked. Eg.:
php daemon.php | mail -s "Daemon stopped" you@example.org
This will call mail each time a line is printed in daemon.php (which should be never, but still.)
Instead, use the double ampersand operator to separate the commands, i.e.
php daemon.php && mail -s "Daemon stopped" you@example.org
A: If you're having trouble checking for the PHP script directly, you can make a trivial wrapper and check for that. I'm not sufficiently familiar with Windows scripting to put how it's done here, but in Bash, it'd look like...
wrapper_for_test_php.sh
#!/bin/bash
php test.php
Then you'd just check for the wrapper like you'd check for any other bash script: pidof -x wrapper_for_test_php.sh
A: I used cmder on Windows and, based on this script, came up with the one below, which I later deployed on Linux as well.
#!/bin/bash
clear
date
while true
do
php -f processEmails.php
echo "wait a little while for 5 secobds...";
sleep 5
done
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Is it possible to delete subdomain cookies? If there is a cookie set for a subdomain, metric.foo.com, is there a way for me to delete the metric.foo.com cookie on a request to www.foo.com? The browser (at least Firefox) seems to ignore a Set-Cookie with a domain of metric.foo.com.
A: I had the same problem with subdomains. For some reason getting the cookie first from the request didn't work. Instead I ended up just creating a new cookie with the same cookie name, and expiry date in the past. That worked perfectly:
void DeleteSubdomainCookie(HttpResponse response, string name)
{
HttpCookie cookie = new HttpCookie(name);
cookie.Expires = DateTime.Now.AddMonths(-1);
cookie.Domain = ".yourdomain.com";
response.Cookies.Add(cookie);
}
A: Cookies are only readable by the domain that created them, so if the cookie was created at metric.foo.com, it will have to be deleted under the same domain as it was created. This includes sub-domains.
If you are required to delete a cookie from metric.foo.com, but are currently running a page at www.foo.com, you will not be able to.
In order to do this, you need to load the page from metric.foo.com, or create the cookie under foo.com so it can be accessible under any subdomain. OR use this:
Response.cookies("mycookie").domain = ".foo.com"
...while creating it, AND before you delete it.
..untested - should work.
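Putting the two answers together, a rough ASP.NET sketch (the cookie name and the ".foo.com" domain are placeholders): create the cookie at the parent domain up front, and then any subdomain can expire it later with the same Domain value.
// Creating the cookie (e.g. from metric.foo.com):
HttpCookie cookie = new HttpCookie("mycookie", "some value");
cookie.Domain = ".foo.com";
Response.Cookies.Add(cookie);
// Deleting it later (e.g. from www.foo.com):
HttpCookie expired = new HttpCookie("mycookie");
expired.Domain = ".foo.com";
expired.Expires = DateTime.Now.AddMonths(-1);
Response.Cookies.Add(expired);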
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
}
|
Q: What's the best way to use SVN to version control a PHP site? I've always just FTPed files down from sites, edited them and put them back up when creating sites, but feel it's worth learning to do things properly.
I've just commited everything to a SVN repo, and have tried sshing into the server and checking out a tagged build, as well as updating that build using switch.
All good, but it's a lot lot slower than my current process.
What's the best way to set something like this up? Most of my time is just bug fixes or small changes rather than large rewrites, so I'm frequently updating things.
A: You don't necessarily need to use SVN to deploy the files to the server. Keep using FTP for that and just use SVN for revision history.
A: You should look at installing rsync to upload changes to your server.
Rsync is great because it compares your local copy of the repo to the copy that's currently on the server and then only sends files that have changed.
This saves you having to remember every file that you changed and selecting them manually to FTP, or having to upload your whole local copy to the server again (and leaving FTP to do the comparisons).
Rsync also lets you exclude files/folder (i.e. .svn/ folders) when syncing between your servers.
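A typical invocation might look something like this (the paths, host and exclude pattern are placeholders, not from the original answer):
# -a preserves permissions/timestamps, -v is verbose, -z compresses during transfer
rsync -avz --exclude='.svn/' /path/to/working-copy/ user@yourserver.com:/var/www/yoursite/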
A: I'd recommend you keep using Subversion to track all changes, even bug fixes. When you wish to deploy to your production server, you should use SSH and call svn update. This process can be automated using Capistrano, meaning that you can sit at your local box and call cap deploy -- Capistrano will SSH into your server and perform the Subversion update. Saves a lot of tedious manual labor.
A: For quick updates I just run svn update from the server.
Sometimes for really really quick updates I edit the files using vim and commit them from the server.
It's not very proper, but quick and quite reliable.
A: If you want to do this properly, you should definitely look into setting up a local SVN repository. I would also highly recommend setting up a continuous integration (CI) server such as CruiseControl, which would automatically run any tests against your PHP code whenever you check in to SVN. Your CI server could also be used to publish your files via FTP to your host at the click of a button, once it has passed the tests.
Although this sounds like a lot of work, it really isn't and the benefits of a smooth deployment process will more than pay for itself in the long run.
A: For my projects, I usually have a repo. On my laptop is a working copy, and the live website is a working copy. I make my changes on the local copy, using my local webserver. When everything is tested and ready to go, I commit the changes, then I ssh into the remote server and svn update.
I also keep a folder in this repository which contains sql files of any changes I've made to the database structure, labelled according to their revision number. For instance, when I commit Revision 74 and it has a couple of extra columns in one of the tables, included in the commit will be dbupdates/rev74.sql. That way, after I do my svn update, all I have to do is run my sql file (mysql db_name -p -u username < dbupdates/rev74.sql) and I'm good to go.
A: If you want to get real funky with it, you could use a build script to get the current version from SVN, then compile your PHP code, then on a successful build, automatically push the changes to your server.
This will help in debugging and may make your code run faster. Also, getting into the build habit has really improved my coding over just pushing the PHP straight to the server and debugging via Firefox.
A: The benefits of source control reveal themselves as the complexity of the project and number of developers increase. If you are working directly on a remote server, and are only making quick patches most of the time, source control might not be worth the effort to you.
Preferably, you should be working from a local working copy of the repository (meaning you should also set up a local server). Working against a remote server using SVN as the only means to update it would slow you down quite considerably.
Having said that, working with SVN (or any other source control) will yield many benefits in the long run - you have a complete history of changes, you can always be sure the server is up-to-date (if you ran update) and if you add more developers to the project you can avoid costly source overwrites from each other.
A: What I do at work is use FTP to upload changes to a test server. Then when I am finished with the section of the site that I was working on, I commit the changes and update both. Sometimes, if I am working on something and I change a lot of files in different directories, I commit it and update the test server. But I don't update the production server. But I am the only programmer here; I wouldn't recommend committing possibly buggy code if there is more than one programmer.
A: I use Zend Studio for Eclipse (currently version 6.1), and I use SVN to keep my source code versioned. Initially I thought the process was somewhat slow because of the commit step (entering a commit comment) and waiting for it to finish.
However, after learning to commit with Ctrl+Alt+C and to check 'Always run in Background', the process doesn't slow me down at all.
Plus, I run everything locally and only SSH in once in a while.
A: I set up a post-commit hook to automatically update my website. It's fast, but you can make mistakes.
A: IF on a *nix server AND you have the appropriate SSH access AND you have space to keep multiple copies of the website, THEN the single most useful versioning technique I have found is to use a symbolic link to point to the "current" version of the website. (You can still use SVN to version source code -- this is a way to easily/instantly switch between versions of the website on the server.)
*
*Set up the webserver to point to /whatever.com as the root of the website.
*Have a folder like /website/r1v00 to which you FTP the website files, then create a symlink called "whatever.com" that points to /website/r1v00
*When you have an updated version of the website, create another folder called /website/r1v01, FTP all the files for the updated site, then change the symlink for "whatever.com" to now point to /website/r1v01. If there are any problems with the new site, you can back it out instantly by simply pointing the "whatever.com" symlink back to /website/r1v00
Of course, you can/should set up scripts to automate the creation and switching of the symlink. In my case, I have an "admin" page written in PHP that lists all the available versions, and allows me to switch to any of them. This technique has saved my bacon several times...!
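A tiny switch script in that spirit might look like this (the /website and /whatever.com paths come from the steps above; everything else is illustrative):
#!/bin/bash
# Usage: ./switch_site.sh r1v01
set -e
NEW_RELEASE="/website/$1"
test -d "$NEW_RELEASE" || { echo "no such release: $NEW_RELEASE" >&2; exit 1; }
# -n replaces the existing symlink instead of descending into the directory it points to
ln -sfn "$NEW_RELEASE" /whatever.com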
Obviously this does not address any issues with versioning database schemas or database content.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Manually inserting data in a table(s) with primary key populated with sequence I have a number of tables that use the trigger/sequence column to simulate auto_increment on their primary keys which has worked great for some time.
In order to reduce the time necessary to perform regression testing against software that uses the db, I create control files with some sample data, and I added loading these to the build process.
This change is causing most of the tests to crash though, as the testing process installs the schema from scratch and the sequences return values that already exist in the tables. Is there any way to programmatically say "Update sequences to max value in column", or do I need to write out a whole script by hand that updates all these sequences, or can I/should I change the trigger that substitutes the null value for the sequence to somehow check this (though I think this might cause the mutating table problem)?
A: You can generate a script to create the sequences with the start values you need (based on their existing values)....
SELECT 'CREATE SEQUENCE '||sequence_name||' START WITH '||last_number||';'
FROM ALL_SEQUENCES
WHERE SEQUENCE_OWNER = 'YOUR_SCHEMA'
(If I understand the question correctly)
A: Here's a simple way to update a sequence value - in this case setting the sequence to 1000 if it is currently 50:
alter sequence MYSEQUENCE increment by 950 nocache;
select MYSEQUENCE.nextval from dual;
alter sequence MYSEQUENCE increment by 1;
Kudos to the creators of PL/SQL Developer for including this technique in their tool.
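If you want to script this against the current contents of a table, a rough PL/SQL sketch along the same lines (my_table, its id column and my_table_seq are made-up names) could look like this:
DECLARE
  l_max  NUMBER;
  l_next NUMBER;
BEGIN
  SELECT NVL(MAX(id), 0) INTO l_max FROM my_table;
  SELECT my_table_seq.NEXTVAL INTO l_next FROM dual;
  IF l_max > l_next THEN
    -- bump the sequence past the highest key already in the table
    EXECUTE IMMEDIATE 'ALTER SEQUENCE my_table_seq INCREMENT BY ' || (l_max - l_next);
    SELECT my_table_seq.NEXTVAL INTO l_next FROM dual;
    EXECUTE IMMEDIATE 'ALTER SEQUENCE my_table_seq INCREMENT BY 1';
  END IF;
END;
/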
A: As part of your schema rebuild, why not drop and recreate the sequence?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I get a decimal value when using the division operator in Python? For example, the standard division operator '/' performs integer (floor) division and drops the fractional part:
>>> 4 / 100
0
However, I want it to return 0.04. What do I use?
A: Make one or both of the terms a floating point number, like so:
4.0/100.0
Alternatively, turn on the feature that will be default in Python 3.0, 'true division', that does what you want. At the top of your module or script, do:
from __future__ import division
A: You might want to look at Python's decimal package, also. This will provide nice decimal results.
>>> decimal.Decimal('4')/100
Decimal("0.04")
A: You need to tell Python to use floating point values, not integers. You can do that simply by using a decimal point yourself in the inputs:
>>> 4/100.0
0.040000000000000001
A: Other answers suggest how to get a floating-point value. While this will be close to what you want, it won't be exact:
>>> 4/100.
0.040000000000000001
If you actually want a decimal value, do this:
>>> import decimal
>>> decimal.Decimal('4') / decimal.Decimal('100')
Decimal("0.04")
That will give you an object that properly knows that 4 / 100 in base 10 is "0.04". Floating-point numbers are actually in base 2, i.e. binary, not decimal.
A: There are three options:
>>> 4 / float(100)
0.04
>>> 4 / 100.0
0.04
which is the same behavior as C, C++, Java, etc., or
>>> from __future__ import division
>>> 4 / 100
0.04
You can also activate this behavior by passing the argument -Qnew to the Python interpreter:
$ python -Qnew
>>> 4 / 100
0.04
The second option will be the default in Python 3.0. If you want to have the old integer division, you have to use the // operator.
Edit: added section about -Qnew, thanks to ΤΖΩΤΖΙΟΥ!
A: A simple route 4 / 100.0
or
4.0 / 100
A: You can't get a decimal value by dividing one integer by another; you'll always get an integer that way (the result is truncated to an integer). You need at least one value to be a floating point number.
A: Here are the two possible cases, shown below:
from __future__ import division
print(4/100)
print(4//100)
A: Try 4.0/100
A: Add the following function to your code and call it as shown below.
# Starting of the function
def divide(number_one, number_two, decimal_place=4):
    quotient = number_one // number_two   # integer part (integer division)
    remainder = number_one % number_two
    if remainder != 0:
        quotient_str = str(quotient)
        for loop in range(0, decimal_place):
            if loop == 0:
                quotient_str += "."
            surplus_quotient = (remainder * 10) // number_two
            quotient_str += str(surplus_quotient)
            remainder = (remainder * 10) % number_two
            if remainder == 0:
                break
        return float(quotient_str)
    else:
        return quotient
# Ending of the function
# Calling the above function
# Structure : divide(<dividend>, <divisor>, <decimal place (optional)>)
divide(1, 7, 10) # Output : 0.1428571428
# OR
divide(1, 7) # Output : 0.1428
This function works on the basis of the Euclidean division algorithm. It is very useful if you don't want to import any extra modules into your project.
Syntax : divide(<dividend>, <divisor>, <decimal place (optional)>)
Code : divide(1, 7, 10) OR divide(1, 7)
Comment below for any queries.
A: You could also try adding a ".0" at the end of the number.
4.0/100.0
A: Import division from the __future__ module, like this:
from __future__ import division
A: It's only dropping the fractional part after the decimal point.
Have you tried: 4.0 / 100
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "72"
}
|
Q: What is PostgreSQL explain telling me exactly? MySQL's explain output is pretty straightforward. PostgreSQL's is a little more complicated. I haven't been able to find a good resource that explains it either.
Can you describe what exactly explain is saying or at least point me in the direction of a good resource?
A: PostgreSQL's official documentation provides an interesting, thorough explanation on how to understand explain's output.
A: Explaining_EXPLAIN.pdf could help too.
A: It executes from most indented to least indented, and I believe from the bottom of the plan to the top. (So if there are two indented sections, the one farther down the page executes first, then when they meet the other executes, then the rule joining them executes.)
The idea is that at each step there are 1 or 2 datasets that arrive and get processed by some rule. If just one dataset, that operation is done to that data set. (For instance scan an index to figure out what rows you want, filter a dataset, or sort it.) If two, the two datasets are the two things that are indented further, and they are joined by the rule you see. The meaning of most of the rules can be reasonably easily guessed (particularly if you have read a bunch of explain plans before), however you can try to verify individual items either by looking in the documentation or (easier) by just throwing the phrase into Google along with a few keywords like EXPLAIN.
This is obviously not a full explanation, but it provides enough context that you can usually figure out whatever you want. For example consider this plan from an actual database:
explain analyze
select a.attributeid, a.attributevalue, b.productid
from orderitemattribute a, orderitem b
where a.orderid = b.orderid
and a.attributeid = 'display-album'
and b.productid = 'ModernBook';
------------------------------------------------------------------------------------------------------------------------------------------------------------
Merge Join (cost=125379.14..125775.12 rows=3311 width=29) (actual time=841.478..841.478 rows=0 loops=1)
Merge Cond: (a.orderid = b.orderid)
-> Sort (cost=109737.32..109881.89 rows=57828 width=23) (actual time=736.163..774.475 rows=16815 loops=1)
Sort Key: a.orderid
Sort Method: quicksort Memory: 1695kB
-> Bitmap Heap Scan on orderitemattribute a (cost=1286.88..105163.27 rows=57828 width=23) (actual time=41.536..612.731 rows=16815 loops=1)
Recheck Cond: ((attributeid)::text = 'display-album'::text)
-> Bitmap Index Scan on orditematt_attributeid_idx (cost=0.00..1272.43 rows=57828 width=0) (actual time=25.033..25.033 rows=16815 loops=1)
Index Cond: ((attributeid)::text = 'display-album'::text)
-> Sort (cost=15641.81..15678.73 rows=14769 width=14) (actual time=14.471..16.898 rows=1109 loops=1)
Sort Key: b.orderid
Sort Method: quicksort Memory: 76kB
-> Bitmap Heap Scan on orderitem b (cost=310.96..14619.03 rows=14769 width=14) (actual time=1.865..8.480 rows=1114 loops=1)
Recheck Cond: ((productid)::text = 'ModernBook'::text)
-> Bitmap Index Scan on id_orderitem_productid (cost=0.00..307.27 rows=14769 width=0) (actual time=1.431..1.431 rows=1114 loops=1)
Index Cond: ((productid)::text = 'ModernBook'::text)
Total runtime: 842.134 ms
(17 rows)
Try reading it for yourself and see if it makes sense.
What I read is that the database first scans the id_orderitem_productid index, using that to find the rows it wants from orderitem, then sorts that dataset using a quicksort (the sort used will change if data doesn't fit in RAM), then sets that aside.
Next, it scans orditematt_attributeid_idx to find the rows it wants from orderitemattribute and then sorts that dataset using a quicksort.
It then takes the two datasets and merges them. (A merge join is a sort of "zipping" operation where it walks the two sorted datasets in parallel, emitting the joined row when they match.)
As I said, you work through the plan inner part to outer part, bottom to top.
A: There is an online helper tool available too, Depesz (explain.depesz.com), which will highlight where the expensive parts of the analysis results are. Pasting the same results into such a tool made it much clearer to me where the problem was.
A: PgAdmin will show you a graphical representation of the explain plan. Switching back and forth between the two can really help you understand what the text representation means. However, if you just want to know what it is going to do, you may be able to just always use the GUI.
A: The part I always found confusing is the startup cost vs total cost. I Google this every time I forget about it, which brings me back to here, which doesn't explain the difference, which is why I'm writing this answer. This is what I have gleaned from the Postgres EXPLAIN documentation, explained as I understand it.
Here's an example from an application that manages a forum:
EXPLAIN SELECT * FROM post LIMIT 50;
Limit (cost=0.00..3.39 rows=50 width=422)
-> Seq Scan on post (cost=0.00..15629.12 rows=230412 width=422)
Here's the graphical explanation from PgAdmin:
(When you're using PgAdmin, you can point your mouse at a component to read the cost details.)
The cost is represented as a tuple, e.g. the cost of the LIMIT is cost=0.00..3.39 and the cost of sequentially scanning post is cost=0.00..15629.12. The first number in the tuple is the startup cost and the second number is the total cost. Because I used EXPLAIN and not EXPLAIN ANALYZE, these costs are estimates, not actual measures.
*
*Startup cost is a tricky concept. It doesn't just represent the amount of time before that component starts. It represents the amount of time between when the component starts executing (reading in data) and when the component outputs its first row.
*Total cost is the entire execution time of the component, from when it begins reading in data to when it finishes writing its output.
As a complication, each "parent" node's costs includes the cost's of its child nodes. In the text representation, the tree is represented by indentation, e.g. LIMIT is a parent node and Seq Scan is its child. In the PgAdmin representation, the arrows point from child to parent — the direction of the flow of data — which might be counterintuitive if you are familiar with graph theory.
The documentation says that costs are inclusive of all child nodes, but notice that the total cost of the parent 3.39 is much smaller than the total cost of its child 15629.12. Total cost is not inclusive because a component like LIMIT doesn't need to process its entire input. See the EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000 LIMIT 2; example in the Postgres EXPLAIN documentation.
In the example above, startup time is zero for both components, because neither component needs to do any processing before it starts writing rows: a sequential scan reads the first row of the table and emits it. The LIMIT reads its first row and then emits it.
When would a component need to do a lot of processing before it can start to output any rows? There are a lot of possible reasons, but let's look at one clear example. Here's the same query from before but now containing an ORDER BY clause:
EXPLAIN SELECT * FROM post ORDER BY body LIMIT 50;
Limit (cost=23283.24..23283.37 rows=50 width=422)
-> Sort (cost=23283.24..23859.27 rows=230412 width=422)
Sort Key: body
-> Seq Scan on post (cost=0.00..15629.12 rows=230412 width=422)
And graphically:
Once again, the sequential scan on post has no startup cost: it starts outputting rows immediately. But the sort has a significant startup cost 23283.24 because it has to sort the entire table before it can output even a single row. The total cost of the sort 23859.27 is only slightly higher than the startup cost, reflecting the fact that once the entire dataset has been sorted, the sorted data can be emitted very quickly.
Notice that the startup time of the LIMIT 23283.24 is exactly equal to the startup time of the sort. This is not because LIMIT itself has a high startup time. It actually has zero startup time by itself, but EXPLAIN rolls up all of the child costs for each parent, so the LIMIT startup time includes the sum of its children's startup times.
This rollup of costs can make it difficult to understand the execution cost of each individual component. For example, our LIMIT has zero startup time, but that's not obvious at first glance. For this reason, several other people linked to explain.depesz.com, a tool created by Hubert Lubaczewski (a.k.a. depesz) that helps understand EXPLAIN by — among other things — subtracting out child costs from parent costs. He mentions some other complexities in a short blog post about his tool.
A: If you install pgAdmin, there's an Explain button that, as well as giving the text output, draws diagrams of what's happening, showing the filters, sorts and sub-set merges - which I find really useful.
A: dalibo/pev2 is a visualizer tool which is very helpful.
It's available here: https://explain.dalibo.com/
Postgres Explain Visualizer 2 (PEV2) looks similar to pev. However pev is not actively maintained.
This project is a rewrite of the excellent Postgres Explain Visualizer (pev). Kudos go to Alex Tatiyants.
The pev project was initially written in early 2016 but seems to be abandoned since then. There was no activity at all for more than 3 years and counting, though there are several issues open and relevant pull requests pending.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "138"
}
|
Q: Use of 'const' for function parameters How far do you go with const? Do you just make functions const when necessary or do you go the whole hog and use it everywhere? For example, imagine a simple mutator that takes a single boolean parameter:
void SetValue(const bool b) { my_val_ = b; }
Is that const actually useful? Personally I opt to use it extensively, including parameters, but in this case I wonder if it's worthwhile?
I was also surprised to learn that you can omit const from parameters in a function declaration but can include it in the function definition, e.g.:
.h file
void func(int n, long l);
.cpp file
void func(const int n, const long l)
Is there a reason for this? It seems a little unusual to me.
A: Extra Superfluous const are bad from an API stand-point:
Putting extra superfluous const's in your code for intrinsic type parameters passed by value clutters your API while making no meaningful promise to the caller or API user (it only hampers the implementation).
Too many 'const' in an API when not needed is like "crying wolf", eventually people will start ignoring 'const' because it's all over the place and means nothing most of the time.
The "reductio ad absurdum" argument to extra consts in API are good for these first two points would be is if more const parameters are good, then every argument that can have a const on it, SHOULD have a const on it. In fact, if it were truly that good, you'd want const to be the default for parameters and have a keyword like "mutable" only when you want to change the parameter.
So lets try putting in const whereever we can:
void mungerum(char * buffer, const char * mask, int count);
void mungerum(char * const buffer, const char * const mask, const int count);
Consider the line of code above. Not only is the declaration more cluttered and longer and harder to read but three of the four 'const' keywords can be safely ignored by the API user. However, the extra use of 'const' has made the second line potentially DANGEROUS!
Why?
A quick misread of the first parameter char * const buffer might make you think that it will not modify the memory in data buffer that is passed in -- however, this is not true! Superfluous 'const' can lead to dangerous and incorrect assumptions about your API when scanned or misread quickly.
Superfluous const are bad from a Code Implementation stand-point as well:
#if FLEXIBLE_IMPLEMENTATION
#define SUPERFLUOUS_CONST
#else
#define SUPERFLUOUS_CONST const
#endif
void bytecopy(char * SUPERFLUOUS_CONST dest,
const char *source, SUPERFLUOUS_CONST int count);
If FLEXIBLE_IMPLEMENTATION is not true, then the API is “promising” not to implement the function the first way below.
void bytecopy(char * SUPERFLUOUS_CONST dest,
const char *source, SUPERFLUOUS_CONST int count)
{
// Will break if !FLEXIBLE_IMPLEMENTATION
while(count--)
{
*dest++=*source++;
}
}
void bytecopy(char * SUPERFLUOUS_CONST dest,
const char *source, SUPERFLUOUS_CONST int count)
{
for(int i=0;i<count;i++)
{
dest[i]=source[i];
}
}
That’s a very silly promise to make. Why should you make a promise that gives no benefit at all to your caller and only limits your implementation?
Both of these are perfectly valid implementations of the same function though so all you’ve done is tied one hand behind your back unnecessarily.
Furthermore, it’s a very shallow promise that is easily (and legally) circumvented.
inline void bytecopyWrapped(char * dest,
const char *source, int count)
{
while(count--)
{
*dest++=*source++;
}
}
void bytecopy(char * SUPERFLUOUS_CONST dest,
const char *source,SUPERFLUOUS_CONST int count)
{
bytecopyWrapped(dest, source, count);
}
Look, I implemented it that way anyhow even though I promised not to – just using a wrapper function. It’s like when the bad guy promises not to kill someone in a movie and orders his henchman to kill them instead.
Those superfluous const’s are worth no more than a promise from a movie bad-guy.
But the ability to lie gets even worse:
I have been enlightened that you can mismatch const in header (declaration) and code (definition) by using spurious const. The const-happy advocates claim this is a good thing since it lets you put const only in the definition.
// Example of const only in definition, not declaration
struct foo { void test(int *pi); };
void foo::test(int * const pi) { }
However, the converse is true... you can put a spurious const only in the declaration and ignore it in the definition. This only makes superfluous const in an API more of a terrible thing and a horrible lie - see this example:
struct foo
{
void test(int * const pi);
};
void foo::test(int *pi) // Look, the const from the declaration is so superfluous I can ignore it here
{
pi++; // I promised in my declaration I wouldn't modify this
}
All the superfluous const actually does is make the implementer's code less readable by forcing him to use another local copy or a wrapper function when he wants to change the variable or pass the variable by non-const reference.
Look at this example. Which is more readable? Is it obvious that the only reason for the extra variable in the second function is because some API designer threw in a superfluous const?
struct llist
{
llist * next;
};
void walkllist(llist *plist)
{
llist *pnext;
while(plist)
{
pnext=plist->next;
walk(plist);
plist=pnext; // This line wouldn't compile if plist was const
}
}
void walkllist(llist * SUPERFLUOUS_CONST plist)
{
llist * pnotconst=plist;
llist *pnext;
while(pnotconst)
{
pnext=pnotconst->next;
walk(pnotconst);
pnotconst=pnext;
}
}
Hopefully we've learned something here. Superfluous const is an API-cluttering eyesore, an annoying nag, a shallow and meaningless promise, an unnecessary hindrance, and occasionally leads to very dangerous mistakes.
A: The following two lines are functionally equivalent:
int foo (int a);
int foo (const int a);
Obviously you won't be able to modify a in the body of foo if it's defined the second way, but there's no difference from the outside.
Where const really comes in handy is with reference or pointer parameters:
int foo (const BigStruct &a);
int foo (const BigStruct *a);
What this says is that foo can take a large parameter, perhaps a data structure that's gigabytes in size, without copying it. Also, it says to the caller, "Foo won't* change the contents of that parameter." Passing a const reference also allows the compiler to make certain performance decisions.
*: Unless it casts away the const-ness, but that's another post.
A: I say const your value parameters.
Consider this buggy function:
bool isZero(int number)
{
if (number = 0) // whoops, should be number == 0
return true;
else
return false;
}
If the number parameter was const, the compiler would stop and warn us of the bug.
A: If you use the ->* or .* operators, it's a must.
It prevents you from writing something like
void foo(Bar *p) { if (++p->*member > 0) { ... } }
which I almost did right now, and which probably doesn't do what you intend.
What I intended to say was
void foo(Bar *p) { if (++(p->*member) > 0) { ... } }
and if I had put a const in between Bar * and p, the compiler would have told me that.
A:
const is pointless when the argument is passed by value since you will
not be modifying the caller's object.
Wrong.
It's about self-documenting your code and your assumptions.
If your code has many people working on it and your functions are non-trivial, then you should mark as const anything and everything that you can. When writing industrial-strength code, you should always assume that your coworkers are psychopaths trying to get you any way they can (especially since it's often yourself in the future).
Besides, as somebody mentioned earlier, it might help the compiler optimize things a bit (though it's a long shot).
A: Ah, a tough one. On one hand, a declaration is a contract, and it really does not make sense to pass a const argument by value. On the other hand, if you look at the function implementation, you give the compiler more chances to optimize if you declare an argument constant.
A: const is pointless when the argument is passed by value since you will not be modifying the caller's object.
const should be preferred when passing by reference, unless the purpose of the function is to modify the passed value.
Finally, a function which does not modify current object (this) can, and probably should be declared const. An example is below:
int SomeClass::GetValue() const {return m_internalValue;}
This is a promise to not modify the object to which this call is applied. In other words, you can call:
const SomeClass* pSomeClass;
pSomeClass->GetValue();
If the function was not const, this would result in a compiler warning.
A: Marking value parameters 'const' is definitely a subjective thing.
However I actually prefer to mark value parameters const, just like in your example.
void func(const int n, const long l) { /* ... */ }
The value to me is in clearly indicating that the function parameter values are never changed by the function. They will have the same value at the beginning as at the end. For me, it is part of keeping to a very functional programming sort of style.
For a short function, it's arguably a waste of time/space to have the 'const' there, since it's usually pretty obvious that the arguments aren't modified by the function.
However for a larger function, it's a form of implementation documentation, and it is enforced by the compiler.
I can be sure if I make some computation with 'n' and 'l', I can refactor/move that computation without fear of getting a different result because I missed a place where one or both is changed.
Since it is an implementation detail, you don't need to declare the value parameters const in the header, just like you don't need to declare the function parameters with the same names as the implementation uses.
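As a minimal sketch of that declaration/definition split (the function and parameter names are made up), the header stays const-free while the definition adds const purely as enforced implementation documentation:
// header file
int scale(int n, long l);
// .cpp file
int scale(const int n, const long l)
{
    // n and l are guaranteed not to change anywhere in this body
    return static_cast<int>(n * l);
}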
A: I tend to use const wherever possible. (Or other appropriate keyword for the target language.) I do this purely because it allows the compiler to make extra optimizations that it would not be able to make otherwise. Since I have no idea what these optimizations may be, I always do it, even where it seems silly.
For all I know, the compiler might very well see a const value parameter, and say, "Hey, this function isn't modifying it anyway, so I can pass by reference and save some clock cycles." I don't think it ever would do such a thing, since it changes the function signature, but it makes the point. Maybe it does some different stack manipulation or something... The point is, I don't know, but I do know trying to be smarter than the compiler only leads to me being shamed.
C++ has some extra baggage, with the idea of const-correctness, so it becomes even more important.
A: Maybe this won't be a valid argument, but if we increment the value of a const variable inside a function the compiler will give us an error:
"error: increment of read-only parameter". So we can use the const keyword as a way to prevent accidentally modifying our variables inside functions (ones we are not supposed to change, i.e. read-only). If we accidentally do it, the compiler will let us know at compile time. This is especially important if you are not the only one working on the project.
A: const should have been the default in C++.
Like this :
int i = 5 ; // i is a constant
var int i = 5 ; // i is a real variable
A: When I coded C++ for a living I consted everything I possibly could. Using const is a great way to help the compiler help you. For instance, const-ing your method return values can save you from typos such as:
foo() = 42
when you meant:
foo() == 42
If foo() is defined to return a non-const reference:
int& foo() { /* ... */ }
The compiler will happily let you assign a value to the anonymous temporary returned by the function call. Making it const:
const int& foo() { /* ... */ }
Eliminates this possibility.
A: The reason is that const for the parameter only applies locally within the function, since it is working on a copy of the data. This means the function signature is really the same anyways. It's probably bad style to do this a lot though.
I personally tend to not use const except for reference and pointer parameters. For copied objects it doesn't really matter, although it can be safer as it signals intent within the function. It's really a judgement call. I do tend to use const_iterator though when looping on something and I don't intend on modifying it, so I guess to each his own, as long as const correctness for reference types is rigorously maintained.
A: In the case you mention, it doesn't affect callers of your API, which is why it's not commonly done (and isn't necessary in the header). It only affects the implementation of your function.
It's not particularly a bad thing to do, but the benefits aren't that great given that it doesn't affect your API, and it adds typing, so it's not usually done.
A: I use const were I can. Const for parameters means that they should not change their value. This is especially valuable when passing by reference. const for function declares that the function should not change the classes members.
A: I do not use const for value-passed parametere. The caller does not care whether you modify the parameter or not, it's an implementation detail.
What is really important is to mark methods as const if they do not modify their instance. Do this as you go, because otherwise you might end up with either lots of const_cast<> or you might find that marking a method const requires changing a lot of code because it calls other methods which should have been marked const.
I also tend to mark local vars const if I do not need to modify them. I believe it makes the code easier to understand by making it easier to identify the "moving parts".
A: On compiler optimizations: http://www.gotw.ca/gotw/081.htm
A: To summarize:
*
*"Normally const pass-by-value is unuseful and misleading at best." From GOTW006
*But you can add them in the .cpp as you would do with variables.
*Note that the standard library doesn't use const. E.g. std::vector::at(size_type pos). What's good enough for the standard library is good for me.
A: Sometimes (too often!) I have to untangle someone else's C++ code. And we all know that someone else's C++ code is a complete mess almost by definition :) So the first thing I do to decipher local data flow is put const in every variable definition until compiler starts barking. This means const-qualifying value arguments as well, because they are just fancy local variables initialized by caller.
Ah, I wish variables were const by default and mutable was required for non-const variables :)
A: There is a good discussion on this topic in the old "Guru of the Week" articles on comp.lang.c++.moderated here.
The corresponding GOTW article is available on Herb Sutter's web site here.
A: 1. Best answer based on my assessment:
The answer by @Adisak is the best answer here based on my assessment. Note that this answer is in part the best because it is also the most well-backed-up with real code examples, in addition to using sound and well-thought-out logic.
2. My own words (agreeing with the best answer):
*
*For pass-by-value there is no benefit to adding const. All it does is:
*
*limit the implementer to have to make a copy every time they want to change an input param in the source code (which change would have no side effects anyway since what's passed in is already a copy since it's pass-by-value). And frequently, changing an input param which is passed by value is used to implement the function, so adding const everywhere can hinder this.
*and adding const unnecessarily clutters the code with consts everywhere, drawing attention away from the consts that are truly necessary to have safe code.
*When dealing with pointers or references, however, const is critically important when needed, and must be used, as it prevents undesired side effects with persistent changes outside the function, and therefore every single pointer or reference must use const when the param is an input only, not an output. Using const only on parameters passed by reference or pointer has the additional benefit of making it really obvious which parameters are pointers or references. It's one more thing to stick out and say "Watch out! Any param with const next to it is a reference or pointer!".
*What I've described above has frequently been the consensus achieved in professional software organizations I have worked in, and has been considered best practice. Sometimes even, the rule has been strict: "don't ever use const on parameters which are passed by value, but always use it on parameters passed by reference or pointer if they are inputs only."
3. Google's words (agreeing with me and the best answer):
(From the "Google C++ Style Guide")
For a function parameter passed by value, const has no effect on the caller, thus is not recommended in function declarations. See TotW #109.
Using const on local variables is neither encouraged nor discouraged.
Source: the "Use of const" section of the Google C++ Style Guide: https://google.github.io/styleguide/cppguide.html#Use_of_const. This is actually a really valuable section, so read the whole section.
Note that "TotW #109" stands for "Tip of the Week #109: Meaningful const in Function Declarations", and is also a useful read. It is more informative and less prescriptive on what to do, and based on context came before the Google C++ Style Guide rule on const quoted just above, but as a result of the clarity it provided, the const rule quoted just above was added to the Google C++ Style Guide.
Also note that even though I'm quoting the Google C++ Style Guide here in defense of my position, it does NOT mean I always follow the guide or always recommend following the guide. Some of the things they recommend are just plain weird, such as their kDaysInAWeek-style naming convention for "Constant Names". However, it is still nonetheless useful and relevant to point out when one of the world's most successful and influential technical and software companies uses the same justification as I and others like @Adisak do to back up our viewpoints on this matter.
4. Clang's linter, clang-tidy, has some options for this:
A. It's also worth noting that Clang's linter, clang-tidy, has an option, readability-avoid-const-params-in-decls, described here, to support enforcing in a code base not using const for pass-by-value function parameters:
Checks whether a function declaration has parameters that are top level const.
const values in declarations do not affect the signature of a function, so they should not be put there.
Examples:
void f(const string); // Bad: const is top level.
void f(const string&); // Good: const is not top level.
And here are two more examples I'm adding myself for completeness and clarity:
void f(char * const c_string); // Bad: const is top level. [This makes the _pointer itself_, NOT what it points to, const]
void f(const char * c_string); // Good: const is not top level. [This makes what is being _pointed to_ const]
B. It also has this option: readability-const-return-type - https://clang.llvm.org/extra/clang-tidy/checks/readability-const-return-type.html
5. My pragmatic approach to how I'd word a style guide on the matter:
I'd simply copy and paste this into my style guide:
[COPY/PASTE START]
*
*Always use const on function parameters passed by reference or pointer when their contents (what they point to) are intended NOT to be changed. This way, it becomes obvious when a variable passed by reference or pointer IS expected to be changed, because it will lack const. In this use case const prevents accidental side effects outside the function.
*It is not recommended to use const on function parameters passed by value, because const has no effect on the caller: even if the variable is changed in the function there will be no side effects outside the function. See the following resources for additional justification and insight:
*
*"Google C++ Style Guide" "Use of const" section
*"Tip of the Week #109: Meaningful const in Function Declarations"
*Adisak's Stack Overflow answer on "Use of 'const' for function parameters"
*"Never use top-level const [ie: const on parameters passed by value] on function parameters in declarations that are not definitions (and be careful not to copy/paste a meaningless const). It is meaningless and ignored by the compiler, it is visual noise, and it could mislead readers" (https://abseil.io/tips/109, emphasis added).
*The only const qualifiers that have an effect on compilation are those placed in the function definition, NOT those in a forward declaration of the function, such as in a function (method) declaration in a header file.
*Never use top-level const [ie: const on variables passed by value] on values returned by a function.
*Using const on pointers or references returned by a function is up to the implementer, as it is sometimes useful.
*TODO: enforce some of the above with the following clang-tidy options:
*https://clang.llvm.org/extra/clang-tidy/checks/readability-avoid-const-params-in-decls.html
*https://clang.llvm.org/extra/clang-tidy/checks/readability-const-return-type.html
Here are some code examples to demonstrate the const rules described above:
const Parameter Examples:
(some are borrowed from here)
void f(const std::string); // Bad: const is top level.
void f(const std::string&); // Good: const is not top level.
void f(char * const c_string); // Bad: const is top level. [This makes the _pointer itself_, NOT what it points to, const]
void f(const char * c_string); // Good: const is not top level. [This makes what is being _pointed to_ const]
const Return Type Examples:
(some are borrowed from here)
// BAD--do not do this:
const int foo();
const Clazz foo();
Clazz *const foo();
// OK--up to the implementer:
const int* foo();
const int& foo();
const Clazz* foo();
[COPY/PASTE END]
Keywords: use of const in function parameters; coding standards; C and C++ coding standards; coding guidelines; best practices; code standards; const return values
A: I use const on function parameters that are references (or pointers) which are only [in] data and will not be modified by the function. Meaning, when the purpose of using a reference is to avoid copying data and not to allow changing the passed parameter.
Putting const on the boolean b parameter in your example only puts a constraint on the implementation and doesn't contribute for the class's interface (although not changing parameters is usually advised).
The function signature for
void foo(int a);
and
void foo(const int a);
is the same, which explains your .cpp and .h
Asaf
A: If the parameter is passed by value (and is not a reference), usually there is not much difference whether the parameter is declared as const or not (unless it contains a reference member -- not a problem for built-in types). If the parameter is a reference or pointer, it is usually better to protect the referenced/pointed-to memory, not the pointer itself (I think you cannot make the reference itself const, not that it matters much as you cannot change the referee).
It seems a good idea to protect everything you can as const. You can omit it without fear of making a mistake if the parameters are just PODs (including built-in types) and there is no chance of them changing further along the road (e.g. in your example the bool parameter).
I didn't know about the .h/.cpp file declaration difference, but it does make some sense. At the machine code level, nothing is "const", so if you declare a function (in the .h) as non-const, the code is the same as if you declare it as const (optimizations aside). However, it helps you to have the compiler verify that you don't change the value of the variable inside the implementation of the function (.cpp). It might come in handy in the case when you're inheriting from an interface that allows change, but you don't need to change the parameter to achieve the required functionality.
A: I wouldn't put const on parameters like that - everyone already knows that a boolean (as opposed to a boolean&) is constant, so adding it in will make people think "wait, what?" or even that you're passing the parameter by reference.
A: The thing to remember with const is that it is much easier to make things const from the start than it is to try to put them in later.
Use const when you want something to be unchanged - it's an added hint that describes what your function does and what to expect. I've seen many a C API that could do with some of them, especially ones that accept C strings!
I'd be more inclined to omit the const keyword in the cpp file than the header, but as I tend to cut+paste them, they'd be kept in both places. I have no idea why the compiler allows that; I guess it's a compiler thing. Best practice is definitely to put your const keyword in both files.
A: All the consts in your examples have no purpose. C++ is pass-by-value by default, so the function gets copies of those ints and booleans. Even if the function does modify them, the caller's copy is not affected.
So I'd avoid extra consts because
*
*They're redundant
*They clutter up the text
*They prevent me from changing the passed-in value in cases where it might be useful or efficient.
A: As parameters are being passed by value, it doesn't make any difference from the calling function's perspective whether you specify const or not. It basically does not make any sense to declare pass-by-value parameters as const.
A: There's really no reason to make a value-parameter "const" as the function can only modify a copy of the variable anyway.
The reason to use "const" is if you're passing something bigger (e.g. a struct with lots of members) by reference, in which case it ensures that the function can't modify it; or rather, the compiler will complain if you try to modify it in the conventional way. It prevents it from being accidentally modified.
A: A const parameter is useful only when the parameter is passed by reference, i.e. as either a reference or a pointer. When the compiler sees a const parameter, it makes sure that the variable used in the parameter is not modified within the body of the function. Why would anyone want to make a by-value parameter constant? :-)
A: I know the question is "a bit" outdated, but as I came across it, somebody else may also do so in the future... still, I doubt the poor fellow will scroll all the way down here to read my comment :)
It seems to me that we are still too confined to a C-style way of thinking. In the OOP paradigm we play around with objects, not types. A const object may be conceptually different from a non-const object, specifically in the sense of logical const (in contrast to bitwise const). Thus even if const correctness of function params is (perhaps) over-careful in the case of PODs, it is not so in the case of objects. If a function works with a const object it should say so. Consider the following code snippet
#include <iostream>
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
class SharedBuffer {
private:
int fakeData;
int const & Get_(int i) const
{
std::cout << "Accessing buffer element" << std::endl;
return fakeData;
}
public:
int & operator[](int i)
{
Unique();
return const_cast<int &>(Get_(i));
}
int const & operator[](int i) const
{
return Get_(i);
}
void Unique()
{
std::cout << "Making buffer unique (expensive operation)" << std::endl;
}
};
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
void NonConstF(SharedBuffer x)
{
x[0] = 1;
}
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
void ConstF(const SharedBuffer x)
{
int q = x[0];
}
//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
int main()
{
SharedBuffer x;
NonConstF(x);
std::cout << std::endl;
ConstF(x);
return 0;
}
PS: you may argue that a (const) reference would be more appropriate here and would give you the same behaviour. Well, right. I'm just giving a different picture from what I could see elsewhere...
A: As a VB.NET programmer who needs to use a C++ DLL with 50+ exposed functions, and a .h file that only sporadically uses the const qualifier, it is difficult to know when to access a variable using ByRef or ByVal.
Of course the program tells you by generating an exception error on the line where you made the mistake, but then you need to guess which of the 2-10 parameters is wrong.
So now I have the distasteful task of trying to convince a developer that they should really define their variables (in the .h file) in a manner that allows an automated method of creating all of the VB.NET function definitions easily. They will then smugly say, "read the ... documentation."
I have written an awk script that parses a .h file, and creates all of the Declare Function commands, but without an indicator as to which variables are R/O vs R/W, it only does half the job.
EDIT:
At the encouragement of another user I am adding the following;
Here is an example of a (IMO) poorly formed .h entry;
typedef int (EE_STDCALL *Do_SomethingPtr)( int smfID, const char* cursor_name, const char* sql );
The resultant VB from my script;
Declare Function Do_Something Lib "SomeOther.DLL" (ByRef smfID As Integer, ByVal cursor_name As String, ByVal sql As String) As Integer
Note the missing "const" on the first parameter. Without it, a program (or another developer) has no Idea the 1st parameter should be passed "ByVal." By adding the "const" it makes the .h file self documenting so that developers using other languages can easily write working code.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "515"
}
|
Q: How does including a SQL index hint affect query performance? Say I have a table in a SQL 2005 database with 2,000,000+ records and a few indexes. What advantage is there to using index hints in my queries? Are there ever disadvantages to using index hints in queries?
A: First, try using SQL Profiler to generate a .trc file of activity in your database for a normal workload over a few hours. And then use the "Database Engine Tuning Advisor" on the SQL Server Management Studio Tools menu to see if it suggests any additional indexes, composite indexes, or covering indexes that may be beneficial.
I never use query hints and mostly work with multi-million row databases. They sometimes can affect performance negatively.
A: The key point that I believe everyone here is pointing to is that, with VERY careful consideration, the usage of index hints can improve the performance of your queries IF AND ONLY IF multiple indexes exist that could be used to retrieve the data, AND SQL Server is not using the correct one.
In my experience I have found that it is NOT very common to need index hints; I believe I have maybe 2-3 queries in use today that use them. Proper index creation and database optimization should get you most of the way to a well-performing database.
A: The index hint will only come into play where your query involves joining tables, and where the columns being used to join to the other table matches more than one index. In that case the database engine may choose to use one index to make the join, and from investigation you may know that if it uses another index the query will perform better. In that case you provide the index hint telling the database engine which index to use.
A: My experience is that sometimes you know more about your dataset than SQL Server does. In that case you should use query hints. In other words: you help the optimizer decide.
I once built a data warehouse where SQL Server did not use the optimal index on a complex query. By giving an index hint in my query I managed to make the query go about 100 times faster.
Use them only after you have analysed the query plan. If you think your query can run faster when using another index or by using the indexes in a different order, give the server a hint.
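For reference, the hint itself is just a table hint in the FROM clause. A sketch with made-up object names (always compare the plan with and without it before keeping it):
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o WITH (INDEX(IX_Orders_CustomerID))
WHERE o.CustomerID = 42;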
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: sharepoint 2007 - custom content type - filtered lookup column I had defined a custom content type, and I am trying to define a filtered lookup column. I can select the list from where to pick up the column I need, but I can't find any example of the needed format of query string. I can filter the list manually by appending "?FilterField1=columnName&FilterValue1=myValue" to the list URL.
Where can I find some examples of query strings for filtering the lookup column?
FilteredLookUp.jpg http://asimilatorul.com/media/so/FilteredLookUp.jpg
A: Have a look, I don't know if this could help you:
Filtered Lookup Lists in SharePoint
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Specman macro to do set subtraction with int_range_list objects I work with a bunch of sets in order to generate constrained random traffic, but I want to be able to call a Specman macro that computes the complement of a set with syntax like:
COMPLEMENT begin
domain=[0..10,24..30],
complementing_set=[2..3,27..30]
end
and have it generate:
[0..1,4..10,24..26]
Every time I need the complement of a set I'm using fully populated lists (e.g. {0;1;2;3....} ) and then removing elements, instead of using Specman's built-in int_range_list object. And I'm also doing a lot of these set calculations at run-time instead of compile-time.
A: You can try this:
var domain: list of int = {0..10, 24..30};
var complementing_set: list of int = {2..3, 27..30};
var complement: list of int = domain.all(not (it in complementing_set));
The all pseudo-method generates a sublist of the parent list of all the elements in the parent list for which the condition in the parentheses holds.
A: In the recent versions of Specman, you can use the pre-defined set type, that serves exactly this purpose. For example, you can do things like this:
var s1: set = [1..5, 10..15];
var s2: set = [4..13];
var s3: set = s1.intersect(s2);
and even like this:
x: int;
y: int;
........
var s1: set = [x..y];
var s2: set = [1..10];
var s3: set = s1.union(s2);
etc.
A: One more way may be to use uints; say you have 500 possible values:
domain : uint(bits:500);
complement : uint(bits:500);
set : uint(bits:500) = domain & ~complement;
you can later extract the indices with
set_l : list of uint = set[.].all_indices(it==1);
Depending on your ratio of domain size to possible values, this method may be quicker to calculate.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Finding common blocks I have two files (f1 and f2) containing some text (or binary data).
How can I quickly find common blocks?
e.g.
f1: ABC DEF
f2: XXABC XEF
output:
common blocks:
length 4: "ABC " in f1@0 and f2@2
length 2: "EF" in f1@5 and f2@8
A: This is a great tool for such purposes:
http://sourceforge.net/projects/duplo/
A: Wikipedia has some pseudocode for finding the longest common substring between two sequences of data. In your case, you simply extract all common substrings from the table that are not prefixes of other common substrings (i.e. the maximal common substrings).
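A rough illustration of that dynamic-programming idea in C++ (the function name is my own choice; this O(n*m) table is fine for small inputs, but a suffix-tree or suffix-automaton approach scales better for large or binary files):
#include <iostream>
#include <string>
#include <vector>

// dp[i][j] is the length of the common suffix of a[0..i) and b[0..j).
std::string longestCommonSubstring(const std::string& a, const std::string& b)
{
    std::vector<std::vector<int>> dp(a.size() + 1, std::vector<int>(b.size() + 1, 0));
    int best = 0, endA = 0;
    for (size_t i = 1; i <= a.size(); ++i) {
        for (size_t j = 1; j <= b.size(); ++j) {
            if (a[i - 1] == b[j - 1]) {
                dp[i][j] = dp[i - 1][j - 1] + 1;
                if (dp[i][j] > best) { best = dp[i][j]; endA = static_cast<int>(i); }
            }
        }
    }
    return a.substr(endA - best, best);
}

int main()
{
    std::cout << "\"" << longestCommonSubstring("ABC DEF", "XXABC XEF") << "\"\n"; // "ABC "
    return 0;
}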
A: The open-source PMD project has a cut-and-paste detector module which is mentioned on this page: http://pmd.sourceforge.net/integrations.html.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: ASP.NET and Timers Consider this code...
using System.Threading;
//...
Timer someWork = new Timer(
delegate(object state) {
//Do some work here...
},
null, 0, 60000);
HttpContext.Current.Application["SomeWorkItem"] = someWork;
Could this be dangerous? Caching a timer in the Application to perform some work in the background while your site runs seems safe, but I wondered if anyone has some experience with this.
I'm sure that writing a service to run in the background would certainly be much better, but that isn't always an option. Is this an alternative?
A: The problem with this is that you are not guaranteed the process still being alive. IIS will reclaim the process basically whenever it feels like it, so you run the risk of it not being performed.
If you need this work done then you need to either code it into a web call, or have a service running in the background of the server.
A: This would generally be a bad idea, as System.Threading.Timer uses threads from the ThreadPool, the same as ASP.Net.
If for whatever reason your timer delegate blocks or stops, the timer will simply begin a new thread after the timeout period, which eats into the threads available to ASP.NET.
If they all begin blocking, effectively you will not be able to serve any more web requests (probably a bad thing)
A: That would be dangerous, as there can be times when the worker process gets recycled or the AppDomain crashes and the work item is killed; you may want it to recover what it was doing, and that may not be possible.
A Windows service may be OK if you can get that work item out into a service. If an HttpContext is required for the work, though, you could have a Windows service call a web service periodically; that may work, though it is likely not ideal.
A: That makes sense, but just for fun, what if the work doesn't need to run if the site gets shut down? If it's associated with the Application_Start event and only needs to run while people are browsing the site, what are the risks at that point?
Good answers, I'm just curious a little more about how that works on the inside.
A: I would recommend you set up a scheduled task to run a page on your site. I usually point the scheduled task to a .vbs file with the following:
On Error Resume Next
Dim objRequest
Dim URL
Set objRequest = CreateObject("Microsoft.XMLHTTP")
URL = "http://www.mywebsite.com/cron/pagetorun.ashx"
objRequest.open "POST", URL , false
objRequest.Send
Set objRequest = Nothing
A: Omar Al Zabir has an excellent post on using cache item callbacks for this purpose.
http://www.codeproject.com/KB/aspnet/ASPNETService.aspx?fid=229682&df=90&mpp=25&noise=3&sort=Position&view=Quick&fr=76&select=1334820
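The gist of that article is to use a cache item's expiration callback as a recurring trigger. Roughly (the class and key names below are made up, and the worker-process recycling caveats from the other answers still apply):
using System;
using System.Web;
using System.Web.Caching;

public static class PseudoService
{
    private const string CacheKey = "PseudoServiceTrigger"; // invented key name

    // Call this from Application_Start to schedule the first "tick".
    public static void Start()
    {
        HttpRuntime.Cache.Insert(
            CacheKey,
            DateTime.Now,                   // the cached value itself is irrelevant
            null,
            DateTime.Now.AddMinutes(1),     // fire roughly once a minute
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,
            OnCacheItemRemoved);
    }

    private static void OnCacheItemRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        // Do the background work here, then re-register for the next interval.
        Start();
    }
}
Each expiration fires the callback on a ThreadPool thread, the work runs, and the item is re-inserted to schedule the next run.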
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Can I unshelve to a different branch in tfs 2008? Let's assume that some developer in my team shelved his changes that he did in branch A. And I am working on branch B. Can I unshelve his changes into branch B? (By GUI or command prompt)
A: Alternative solution to tfpt that avoids having to merge each file manually
The problem with the tfs power tool is that you're doing a 'baseless merge' so have to confirm every file. I had a shelveset of over 800 files and I never trust the 'auto merge' button and didn't want to go through each file in turn - so I had to find another way!
*
*Download and install the TFS Shelveset Sidekick.
*The tool appears under 'Tools' in VS2010
*Run the 'Shelveset Sidekick' tool, click Search to show shelvesets
*Right click on your shelveset and select 'Export Shelveset'
*Save to an empty location such as C:\temp\shelveset-name
*Now have a complete directory structure containing JUST the new files
(Note: There's no progress bar when exporting - so if you have a large shelveset that takes a long time to export you'll just have to check in Windows Explorer (File>Properties>Size) that the files are still coming down if you think it's frozen).
You now just have to copy them over to the new branch with Windows Explorer.
This worked for me :
*
*Checking out the whole solution first (in the new branch)
*Close that solution
*Take TFS offline from within VS (Tool to do this) - see below for why this is important...
*Copy files over in Windows Explorer. The directory structure in c:\temp\shelveset-name will have to be renamed to correspond to the new branch. Tip: Make sure you copy to the right place!!!
*Bring VS online
*It should find all the changes and add the new files
*If it asks you to bind the sourcecontrol be sure to verify the path is correct for the new branch.
*Test - and then checkin the new files
Important: I've found that if you don't first take TFS offline then you'll end up with any new files (from your unshelved changeset) showing without a little red check mark, and you'll have to exclude and include them again to get them to add. If anyone has an alternative solution to this problem I'd love to know - refreshing doesn't seem to work.
A: The Visual Studio Power Tools should let you do this.
C:\src\2\Merlin\Main>tfpt unshelve /?
tfpt unshelve - Unshelve into workspace with pending changes
Allows a shelveset to be unshelved into a workspace with pending changes.
Merges content between local and shelved changes. Allows migration of shelved
changes from one branch into another by rewriting server paths.
Usage: tfpt unshelve [shelvesetname[;username]] [/nobackup]
[/migrate /source:serverpath /target:serverpath]
shelvesetname The name of the shelveset to unshelve
/nobackup Skip the creation of a backup shelveset
/migrate Rewrite the server paths of the shelved items
(for example to unshelve into another branch)
/source:serverpath Source location for path rewrite (supply with /migrate)
/target:serverpath Target location for path rewrite (supply with /migrate)
/nobackup Skip the creation of a backup shelveset
For example to merge a shelve set called "Shelve Set Name" created on Branch1 to Branch2 use this:
>tfpt unshelve "Shelve Set Name";domain\userName /migrate /source:"$/Project/Branch1/" /target:"$/Project/Branch2/"
A: The shelf information includes the specific path it goes to. Unfortunately I don't know of any automatic way to unshelve to any location other than the one it was shelved to. The times I've wanted to do this I had to check out the equivalent files in the new branch, unshelve from the old branch, then manually copy the files over.
EDIT: Well, I guess I was doing it the hard way. I'll have to try out Curt's solution. :)
A: I spent a good amount of time getting this done and had a few issues to overcome. It is possible, but here are the issues and a few rules to follow to avoid them.
Error:
unable to determine the workspace
This particular issue was solved by running the command from source branch root folder. This is contrary to some answers on SO where they say to use "target" branch - no, use "source":
cd [your !!source!! branch root]
tfpt unshelve /migrate /source:"$/MyCollection/Development/Maint1.1" /target:"$/MyCollection/Development/Maint1.2" "myShelveset;UserName"
The second issue appeared after this: it seemed that it couldn't connect to the TFS server. What I realized is that I have multiple versions of VS installed, connected to different TFS servers. I was using VS12 and had the workspace and server connection there, but I didn't realize that the same connection needs to be replicated in VS13 for TFPT2013 to work. It connects to the same server and workspace.
I also tried doing it using TFPT2015 but I installed it and it didn't install TFPT.exe hence it was useless. So I tried from TFPT2013 to TFS2015 and it worked for this particular command. I wonder, why not, if VS12/13 works fine against TFS2015?
To summarize
*
*Use CMD or DevCMD - doesn't matter
*run command from source branch root folder
*verify Team Explorer Server connection for specific VS
*TF Power Tools 2013 work against TFS v15; at least the migrate option works
A: The following steps can be used for small size shelvesets (~20 files or less).
*
*On the shelveset and target branches, start by having all pending updates checked in or rolled back.
*On the shelveset branch, unshelve the files from the applicable shelveset.
*On the target branch, checkout any of the existing files that were in the unshelved shelveset.
*Compare the unshelved files on the shelveset branch with those on the target branch to identify those files that require merge updates (if any).
*If needed, manually make merge updates to the applicable files of the prior step and save these files in the target branch workspace.
*Copy the other shelveset files from the shelveset branch workspace to the target branch workspace.
*Compare the new updated files in the target branch workspace with the checked in files. Make any corrections as needed.
*Check in the new updated files on the target branch.
*Rollback the unshelved files on the shelveset branch.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "105"
}
|
Q: C++ timing, milliseconds since last whole second I'm working on a C++ application that needs detailed timing information, down to the millisecond level.
We intend to gather the time to second accuracy using the standard time() function in <ctime>. We would like to additionally gather the milliseconds elapsed since the last second given by time().
Does anyone know a convenient method for obtaining this information?
A: There is not a portable solution to this problem, since ANSI C does not define a standard millisecond-accurate time function. If you're using Windows, you can use GetTickCount(), timeGetTime(), or QueryPerformanceCounter()/QueryPerformanceFrequency(). Keep in mind that these have different accuracies and different runtime costs.
There are other similar functions in other operating systems; I'm not sure what they are off the top of my head.
A: GetTickCount in Windows
gettimeofday in *nix
QueryPerformanceCounter in Windows for better resolution (though GetTickCount should do it)
A: High Resolution, Low Overhead Timing for Intel Processors
If you're on Intel hardware, here's how to read the CPU real-time instruction counter. It will tell you the number of CPU cycles executed since the processor was booted. This is probably the finest-grained counter you can get for performance measurement.
Note that this is the number of CPU cycles. On linux you can get the CPU speed from /proc/cpuinfo and divide to get the number of seconds. Converting this to a double is quite handy.
When I run this on my box, I get
11867927879484732
11867927879692217
it took this long to call printf: 207485
Here's the Intel developer's guide that gives tons of detail.
#include <stdio.h>
#include <stdint.h>
inline uint64_t rdtsc() {
uint32_t lo, hi;
__asm__ __volatile__ (
"xorl %%eax, %%eax\n"
"cpuid\n"
"rdtsc\n"
: "=a" (lo), "=d" (hi)
:
: "%ebx", "%ecx");
return (uint64_t)hi << 32 | lo;
}
int main()
{
unsigned long long x;
unsigned long long y;
x = rdtsc();
printf("%lld\n",x);
y = rdtsc();
printf("%lld\n",y);
printf("it took this long to call printf: %lld\n",y-x);
}
A: If you're on Unix, gettimeofday() will return seconds and microseconds, up to the resolution of the system clock.
int gettimeofday(struct timeval *tv, struct timezone *tz);
struct timeval {
time_t tv_sec; /* seconds */
suseconds_t tv_usec; /* microseconds */
};
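Applied to this question, the microseconds field gives you the part since the last whole second directly; a minimal sketch:
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);                 /* seconds + microseconds since the epoch */

    long seconds      = (long)tv.tv_sec;     /* same whole seconds that time() reports */
    long milliseconds = tv.tv_usec / 1000;   /* elapsed since that second */

    printf("%ld.%03ld\n", seconds, milliseconds);
    return 0;
}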
A: As others have said, there is not a portable way to do this.
What I do (on MSVC++ and Linux/g++) is use the following class, which uses QueryPerformanceCounter on Windows and gettimeofday on Linux. It's a timer class that can get you the elapsed time between two calls. You might want to modify it to fit your needs.
#if defined(_MSC_VER)
# define NOMINMAX // works around a bug in windows.h that defines macros
// "min" and "max" which conflict with the STL.
# include <windows.h>
#else
# include <sys/time.h>
#endif
namespace Utils
{
/**
* Implements a stopwatch that measures the time between two calls.
*/
class CTimer
{
private:
# if defined(_MSC_VER)
LARGE_INTEGER m_depart;
# else
timeval m_depart;
# endif
public:
/**
* Starts the timer.
*
* This function starts the timer. If the timer is already
* started, it simply restarts it.
*/
inline void start()
{
# if defined(_MSC_VER)
QueryPerformanceCounter(&m_depart);
# else
gettimeofday(&m_depart, 0);
# endif
};
/**
* Returns the number of seconds since the timer was started.
*
* @return Number of seconds elapsed since the timer was started
*/
inline float GetSecondes() const
{
# if defined(_MSC_VER)
LARGE_INTEGER now;
LARGE_INTEGER freq;
QueryPerformanceCounter(&now);
QueryPerformanceFrequency(&freq);
return (now.QuadPart - m_depart.QuadPart) / static_cast<float>(freq.QuadPart);
# else
timeval now;
gettimeofday(&now, 0);
return now.tv_sec - m_depart.tv_sec + (now.tv_usec - m_depart.tv_usec) / 1000000.0f;
# endif
};
};
}
A: Boost.DateTime has millisecond and nanosecond representations IF the underlying platform supports them. While it is using platform specific code, it is keeping those details out of your code.
If that is a big deal, they do have another way of doing platform independent subsecond resolution. This page a couple of paragraphs down talks about how to do it.
(From the Page)
For example, let's suppose we want to construct using a count that represents tenths of a second. That is, each tick is 0.1 second.
int number_of_tenths = 5;
//create a resolution independent count -- divide by 10 since there are
//10 tenths in a second.
int count = number_of_tenths*(time_duration::ticks_per_second()/10);
time_duration td(1,2,3,count); //01:02:03.5 //no matter the resolution settings
A: #include <ctime>
double elapsed = static_cast<double>(clock()) / CLOCKS_PER_SEC;
elapsed will be the elapsed processor time, in seconds.
Resolution is operating-system dependent, but is generally better than millisecond resolution on most systems.
A: Anything not in the (c)time.h header requires OS-specific methods. I believe all those methods are second resolution.
What OS are you working in?
A: Look into the QueryPerformanceCounter methods if this is for Windows.
A: Intel's Threading Building Blocks library has a function for this, but TBB is currently only available on Intel and clones (that is, it's not available on SPARC, PowerPC, ARM, etc.)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: What's the difference between SQL Server Management Studio and the Express edition? I'm using Express currently. What extra features do I get with the full edition?
A: Assuming you are talking about differences in the client tools and not the database engine, the only differences that I have found so far are the lack of reports and the profiler. In the full version, on the tree of objects you can right click and select from set of standard reports. In the Express version, that menu option is missing.
The express version does not install the profiler.
A: Most of the High-Availability options are missing from the Express Edition. The Express editions are great for development purposes. Here's the comparison facts:
http://www.microsoft.com/sql/prodinfo/features/compare-features.mspx
A: The most annoying thing to me is the missing Import/Export options. Even devs need those.
A: One of the features whose absence prevents me from using Studio Express is the ability to import and export data via SSIS (SQL Server Integration Services). It is hard to be a true DBA with just Studio Express. From a developer's standpoint, Studio Express would typically be sufficient.
A: There are no differences in Management Studio. The differences are in the database engine LIMITATIONS! The engine is the same but it will deny you some features.
Import/Export wizard in the express edition can be found at:
C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTSWizard.exe
If you don't have it, download it from Microsoft:
http://go.microsoft.com/fwlink/?LinkId=65111
You could install the Microsoft SQL Server 2005 Express Edition Toolkit to get the cool toys, like the Import/Export wizard and the reports.
The profiler is not part of Management Studio. It is another application that comes with the full version of SQL Server. Even if you have it installed, your Express Edition server engine will refuse to work with it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41"
}
|
Q: Can you force the serialization of an enum value into an integer?
Possible Duplicate:
How do I serialize an enum value as an int?
Hi, all!
I'm wondering if there's a way to force the serialization of an enum value into its integer value, instead of its string representation.
To put you into context:
We're using, in a web application that heavily relies on web services, a single base class for all our request headers, independently of the type of request.
I want to add a Result field to the header, so we'll have a place to pass hints back to the calling app as to how the operation went on the web service side. We already have an enum declared to that effect, but since we have legacy apps that call those web services and may not know about those enums, I'd like to serialize those values as integers.
We've already had to cut down on the length of those headers by using [XmlElement(ElementName = "string representationOfAttributeName")] because we occasionally exceeded IE's maximum URL length, and I wondered whether there's a similar attribute to force the serialization of enum values into integers.
Anyone ever heard of such an attribute?
As ever, thanks for the help,
Pascal
A: Enums do serialize to ints. But generally, if you don't like the way one of your properties gets serialized to XML, you can just do this:
[XmlIgnore]
public MyThing MyThing { get; set; }
[XmlElement("MyThing")]
[EditorBrowsable(EditorBrowsableState.Never), Browsable(false)]
public string MyThingForSerialization
{
    get { return ((int)MyThing).ToString(); }    // assuming MyThing is an enum: expose its numeric value
    set { MyThing = (MyThing)int.Parse(value); } // convert the string back to MyThing
}
A: Enums do NOT serialize as int by default.
I am using XmlSerializer to serialize a class and I have the same problem as described.
(The example from Matt Howells seems to work if you change the scope of MyThingForSerialization to public, but I would like to see a better implementation because this creates one extra public property.)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Optimize y = x*x in Galois field arithmetic I have this C-code to do multiplications over GF(8):
int32_t GaloisMultiply (int32_t a, int32_t b)
{
int32_t i;
int32_t mask = 0x100;
int32_t y = 0;
for(i=0;i<8;i++)
{
if(b & mask)
{
y ^= a;
}
mask >>= 1;
y <<= 1;
}
if(b & 0x1)
{
y ^= a;
}
return(y);
}
That's more or less the text-book implementation.
I wonder if there is a clever optimization for the above algorithm if I can assert that a is always b, e.g. I do squaring instead of multiplication. I'm not after a cryptographic use btw. I just want to make use of the fact that x*x in GF(8) interleaves the bits of x with zero bits one by one.
There are already quite clever methods to do the bit interleaving, but since I've found out that x*x in GF(8) does the bit interleaving thing (by accident) I can't stop trying to use it for bit-interleaving optimizations.
Any ideas?
A: Table-based? link
And when you are limited to x*x, it's a sparse matrix.
Here's another good paper (and a library)
A: int32_t GaloisMultiply( int32_t a )
{
int32_t y = 0;
int32_t b = a & 0x01ff;
while ( b )
{
if ( b & 1 )
y ^= a;
a <<= 1;
b >>= 1;
}
return y;
}
Or if you like:
int32_t GaloisMultiply( int32_t a )
{
int32_t y = 0;
for ( int32_t b = a & 0x01ff; b; b >>= 1 )
{
if ( b & 1 )
y ^= a;
a <<= 1;
}
return y;
}
The reason that this approach is more efficient than the original code above is primarily because the loop is only performed until all the 'interesting' bits in the argument are consumed as opposed to blindly checking all (9) bits.
A table based approach will be faster though.
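For the squaring-only case the table stays small: precompute the carry-less square of every 8-bit value once with the existing GaloisMultiply, and squaring becomes a single lookup. A sketch, assuming the operands fit in 8 bits (wider operands can be split into bytes, since the cross terms of a carry-less square cancel in GF(2)):
#include <stdint.h>

int32_t GaloisMultiply(int32_t a, int32_t b);   /* the routine from the question */

static uint16_t square_table[256];

void InitSquareTable(void)
{
    int i;
    for (i = 0; i < 256; i++)
        square_table[i] = (uint16_t)GaloisMultiply(i, i);
}

/* Carry-less square of an 8-bit value: one table lookup instead of the loop. */
int32_t GaloisSquare(int32_t a)
{
    return square_table[a & 0xFF];
}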
A: Lookup table is definitely the fastest for polynomial basis galois squaring. It is also the fastest for multiplication when using GF(8), but the tables get too large for larger fields as used in ECC. For multiplication in larger fields, the best algorithm is the 'left to right combine' method...(see http://www.amazon.com/Elliptic-Cryptography-Springer-Professional-Computing/dp/038795273X algorithm 2.36, page 50).
A: You could probably write some assembly to do a slightly better job. However, I'd be pretty surprised if this was the bottleneck in your application; have you done any profiling? This function doesn't seem like it's worth optimizing.
A: This is probably not what you are looking for, but here's one minor speedup:
Pass only one argument, if they are guaranteed to be the same.
A: It might help the compiler a bit to mark "a" and "b" as const. Or unrolling the loop by hand. It would be sad if it helped, though...
Isn't it a patent minefield, by the way ?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: What is a solid, elegant, reusable piece of code for determining if an IEnumerable is empty, in .NET? I'm trying to find the most reusable, yet elegant, piece of code possible for determining if an IEnumerable is empty. Ideally, this should be a function I can call absolutely any time I need to tell if an IEnumerable is empty.
While I have developed an answer for .NET 3.5 that has worked well for me so far, my current thought is that there is no perfect answer, since an IEnumerable can technically encapsulate a collection (or queue of iterators) that modifies the underlying results as it iterates, which would cause problems. However, this would also be an impediment to implementing IEnumerable.Count(), and that didn't stop MS from providing it.
So I thought I'd put it to SO to see if someone has a better one, and in case someone else should find it useful.
Edit: Wow, I can't believe I didn't know about IEnumerable.Any. I knew it existed, but never bothered to check what it did. Let this be a lesson. Read the documentation. Just because a method name doesn't imply it does what you want, doesn't mean it doesn't do what you want.
A: !enumerable.Any()
Will attempt to grab the first element only.
To expand on how/why this works: Any determines whether any element of an IEnumerable satisfies a given predicate; if no predicate is given, any element will do, so the call returns true if the enumerable contains at least one element.
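If you want the "call it anywhere" packaging the question asks for, a thin extension method over Any() is about as reusable as it gets; a minimal sketch (the name IsEmpty is my own choice):
using System.Collections.Generic;
using System.Linq;

public static class EnumerableExtensions
{
    // Wraps !source.Any() so the intent reads naturally at the call site.
    public static bool IsEmpty<T>(this IEnumerable<T> source)
    {
        return !source.Any();
    }
}

// Usage: if (customers.IsEmpty()) { ... }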
A: For .net 1/2:
IEnumerator e = enumerable.GetEnumerator();
try
{
    return e.MoveNext();
}
finally
{
    IDisposable d = e as IDisposable;
    if (d != null)
        d.Dispose();
}
Or, with generics:
using (IEnumerator<T> e = enumerable.GetEnumerator())
{
return e.MoveNext();
}
A: You're right that there is no perfect answer. IEnumerable only supports iteration and doesn't guarantee that the enumeration is repeatable. You can't find out if an enumeration contains elements without calling MoveNext at least once, and once you've done so you can't guarantee to be able to reuse the enumeration: it is allowable for IEnumerable.Reset to throw a NotSupportedException. From http://msdn.microsoft.com/en-us/library/system.collections.ienumerator.reset.aspx:
"The Reset method is provided for COM interoperability. It does not necessarily need to be implemented; instead, the implementer can simply throw a NotSupportedException."
Extension methods like IEnumerable<T>.Count and IEnumerable<T>.Any need to call MoveNext under the covers. Useful wrappers, but don't avoid the fact that in the (rare) cases where an enumeration doesn't support Reset, you could have a problem.
A: One thing to be careful of with either of these methods is that not all enumerations can be rolled-back, for instance all implementations of System.Data.IDataReader can only run through once.
In these cases you don't really have a cost in a foreach loop, so long as you account for the fact that it might not even loop once.
A: The first method has a simple implementation of an Empty extension method:
http://signum.codeplex.com/SourceControl/changeset/view/25903#510468
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: How do I correctly add a select to a page with jQuery and get the change event to work? I've currently got multiple selects on a page that are added dynamically with ajax calls using jQuery.
The problem I've had is I could not get the change event to work on the added select unless I use the onchange inside the tag e.g.
<select id="Size" size="1" onchange="onChange(this);">
This works, but I'm wondering if there's a way to get it to be assigned by jquery. I've tried using $('select').change(onChange($(this)); in the usual place of $(document).ready but that didn't work.
I've tried adding the event with bind after the ajax call but that did not work either.
Any better way to assign the event?
A:
$('select').change(onChange($(this));
You need to understand the difference between calling a function and passing it around as an object. Functions are first-class objects in JavaScript, they are an object just like anything else, so they can be stored in variables, passed as arguments to other functions, and so on.
The code as you have it calls the onChange function, and gives the result to jQuery's change function. You don't want that. The idea is to pass the function itself to jQuery's change function, and jQuery calls it when it is appropriate to do so.
When you have parentheses - () - after a function's name, then you are calling it. Otherwise, you are treating it as an object. So what you intend to do can be accomplished like this:
$('select').change(onChange);
A: I had a similar problem and found this solution here:
When you do something like this:
$('p').click( function() { alert('blah'); } )
All the currently existing 'p' elements will have a click handler attached. Now if you go on to add other 'p' elements to the page, they will not have your click handler attached to them. You would need to "rerun" the
$('p').click( function() { alert('blah'); } )
on the new elements to attach the handlers to them.
You might like to look at the "LiveQuery" plugin, as it manages all newly added elements so they get the previously attached handlers attached to them when they're added to a page.
Karl Rudd
So after you add the select's you'll have to repeat the change() call.
A: After adding the select to your page you need to add the change event to it:
$('#newSelect').change(
function()
{
onChange(this)
}
);
If your selects all have the same class, it's best to unbind first, and then rebind:
$('.classname').unbind("change");
$('.classname').change(
function()
{
onChange(this)
}
);
A: Thanks guys for the help, the answers worked but not for what I was doing. I finally figured it out using firebug. I was trying to assign the change event before the ajax had finished doing it's stuff (I'm new to this ajax stuff as you may have noticed).
I added the event assigning to a callback on the load() and now it all works great.
$(this).load('ploc002.php', {JobNumber: JobNumber, Edit: this.id}, function()
{
// The DOM has been changed by the AJAX call
// Now we need to reassign the onchange events
$('select,input').change( function()
{
onChange(this)
});
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How can I bind an event handler to an instance in jQuery? I am trying to bind an event to a "method" of a particular instance of a JavaScript "class" using jQuery. The requirement is that in the event handler I should be able to use the "this" keyword to refer to the instance I originally bound the event to.
In more detail, say I have a "class" as follows:
function Car(owner) {
this.owner = owner;
}
Car.prototype = {
drive: function() {
alert("Driving "+this.owner+"s car!");
}
}
And an instance:
var myCar = new Car("Bob");
I now want to bind an event to the drive "method" of my car so that when ever I click a button for example the drive "method" is called on the myCar instance of the Car "class".
Up until now I've been using the following function to create a closure that allows me to comfortably access instance members using the "this" keyword in my "methods".
function createHandler( obj, method ) {
return function( event ) {
return obj[method](event||window.event);
}
}
I've used it as follows:
document.getElementById("myButton")
.addEventListener("click", createHandler(myCar,"drive"));
How do I accomplish something like this with JQuery?
I'm specifically asking about associating "this" with a designated instance, the other cruft all around I can handle on my own.
A: Just use an anonymous function:
$("#myButton").click(function() { myCar.drive(); });
A: Try this :
$("#myButton").each(function() {
var $btn = $(this);
$btn.on('click',function(){
// Do whatever you want.
});
});
Here you first create a loop to target all #myButton elements (which is wrong in your example; you should be using a class instead), like:
$(".myButton").each(...
Then we attach the click event handler to all of them.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Can't download file in IE7 but there isn't any issue in Firefox, Chrome, etc..? I have a script that receives an encrypted URL and from that generates a download; the most critical part of the script is this:
$MimeType = new MimeType();
$mimetype = $MimeType->getType($filename);
$basename = basename($filename);
header("Content-type: $mimetype");
header("Content-Disposition: attachment; filename=\"$basename\"");
header('Content-Length: '. filesize($filename));
if ( @readfile($filename)===false ) {
header("HTTP/1.0 500 Internal Server Error");
loadErrorPage('500');
}
Downloads work like a charm in any browser except IE. I have seen problems related to 'no-cache' headers, but I don't send anything like that. They talk about UTF-8 characters, but there aren't any UTF-8 characters ($filename doesn't contain any UTF-8 characters either).
A: I solved it by sending the headers
header('Pragma: public');
header('Cache-Control: max-age=0');
I didn't know that session_start() sends headers on its own.
I found the answer in the comments section of: Error: Internet Explorer Cannot Download FileName from WebServer
A: This site has a problem similar to yours in IE6. To summarize:
session_start() by default sends a cache control header including "no-store". Internet Explorer takes this a bit too literally, but doesn't have appropriate error handling for the case, and as a result explodes cryptically when you attempt to save the output page to disk.
Before session_start(), add "session_cache_limiter('none');", or look up that function and tweak the limiter as appropriate (probably 'private' is closer to the mark).
I realize the code snippet you posted does not include a call to session_start();, but I figured I'd share this possible solution in case you do have a call to it and just didn't show us.
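Putting that together with the accepted fix, the order matters: adjust the cache limiter before session_start(), then send the download headers. A sketch (the file name is just an example):
session_cache_limiter('private'); // or 'none'; must come before session_start()
session_start();

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="report.pdf"');
header('Pragma: public');
header('Cache-Control: max-age=0');
readfile('/path/to/report.pdf');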
A: Use this
In IE7, go to Internet Options -> Security -> Custom Level -> Downloads,
then enable automatic prompting for file downloads.
This will solve the problem.
Hope this will help.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Grafting a Git history onto an SVN branch The situation
I have a Git repo and an SVN repo that both hold the same source code but different commit histories. The Git repo has a lot of small, well-commented commits... while the SVN repo has a few huge commits with comments like "Lots of stuff".
Both series of commits follow the same changes made in the code and are roughly equivalent.
The desired outcome
I would like to switch to using Git-SVN without losing the detailed history from the current Git repo. This should be done by 'grafting' the history from the Git repo onto an SVN branch of the project (branched from the point I really started using Git).
Why would you do that? (history)
A while ago I started to play with Git. I started by setting up a Git repo in a project I had under SVN control. With a little config, I had both Git and SVN working in parallel on the same source code.
This was a great way for me to learn and play with Git, while still having the safety net of SVN. It was a sandbox with real data basically. I didn't have the time to really learn Git but I really wanted to tinker with it. This was actually a pretty good way to learn Git for me.
At first, after doing some edits, I would commit to SVN and then to Git... then play with Git knowing my changes were safely in SVN. Soon I was committing more frequently to Git than SVN... Now, SVN commits have fallen to an annoying chore I have to do sometimes.
When learning the difference between git revert and svn revert I was VERY glad I had been checking in to the SVN repo. I almost lost a few weeks' work assuming that the two worked the same.
I now know the glories of Git-SVN and I am using it happily on several other projects.
I fully realized when I started that I might lose my Git repo and have to setup a new one 'properly' using git-svn init... but having played with Git for a while now, I'm sure there is some way of hacking the Git history into SVN.
A: That could be tough to do what you want. You can import a git repo into svn via something like this: http://code.google.com/p/support/wiki/ImportingFromGit, but I think you will have conflicts. You could just recreate your SVN repo from scratch based on your git repo.
For future reference, it probably would've been easier to just use Git as an SVN client:
git-svn clone path/to/your/svn/repo
git-commit -a -m 'my small change'
vi some files to change.txt
git-commit -a -m 'another small change'
git-svn dcommit # sends your little changes as individual svn commits
A: It seems that this is NOT possible. While it is possible to have the current git repo joined to the current svn repo, it does not seem possible to replay the history of the git repo into the svn repo.
The main problem I had was getting git-svn to 'latch' on to a single svn commit. The answer to this problem seems to be git-svn set-tree. This blog post was most helpful:
http://www.reonsoft.com/~john/blog/2008/06/05/git-first-git-svn-later/
This is as far as I could get trying to keep the history in svn:
git branch svn-reconsile HASH_OF_SECOND_COMMIT
git checkout -f svn-reconsile
git svn init file://path/to/repos/myproject/branches/git-import
git svn fetch
git svn set-tree HASH_OF_SECOND_COMMIT
git rebase git-svn
git merge master
git svn dcommit
The problem is that git svn dcommit will only make one revision in svn... not one for each commit in the master branch... therefore the history is squashed in svn.
So the easier solution is to simply jump start git-svn using set-tree and be satisfied that the history is still in git even if it's not in svn. This can be done with the following:
git svn init file://path/to/repos/myproject/branches/git-import
git svn fetch
git svn set-tree HASH_OF_MOST_RECENT_COMMIT
git rebase git-svn
If anyone has any idea how to get around the squashing problem (I have tried --no-squash) please comment! In lieu of clever comments I'm just going to accept keeping the git history and grafting to the most recent svn revision using the second code chunk above.
A: I think this should be possible in one of two ways... I'll outline them now and try to flesh them out later if I can figure them out. If anyone can see how to flesh a part out or knows why a part won't work... please comment!
1 - In place using git-svn
(The following are pseudo commands THEY ARE NOT REAL - DO NOT USE THEM)
rm .svn
(configure git-svn '/myproject/branch/git-remerge')
git svn sync_versions --svn_revision=123 --hash=ad346f221455
git svn dcommit
2 - Using a separate git-svn repo as a proxy
(The following are pseudo commands THEY ARE NOT REAL - DO NOT USE THEM)
mkdir ../svn_proxy
cd ../svn_proxy
git svn init
git checkout hash_of_svn_branch_point
git pull ../messy_repo
A: You may want to checkout Tailor. I used it to convert a git repository into an svn one so that my work could be hosted in our company svn server. It's quite flexible, so it may be able to do what you want.
A: From the git svn repository that you are trying to migrate to, do something like the following:
git remote add old-repo <path-to-old-repo>
git fetch old-repo
# to browse and figure out the hashes, if that helps
gitk --all &
# for each branch you want to graft
git rebase --onto <new git svn branch base> <old-repo branch base> <old-repo branch tip>
# when done
git remote rm old-repo
For your information, you should also be able to do the same thing using git format-patch and git am, but git rebase should be more friendly.
A: I did a bunch of work on this when dealing with git histories for memcached from a few things. A lot of what I was working on was verifying everyone was properly credited for work.
I built a tool to generate a report showing exactly where trees converged regardless of commit hashes, git histories, authors, committers, etc... Take a look at this example report that we used to see where two unrelated repositories containing the same information converged.
From there, I just did lots of manual grafting and filter-branching after lots of google and mailing list searches to figure out who these people who contributed changes actually were.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: ADSI will not connect to IIS from XP Workstation I'm successfully using VBScript within WScript to remotely read and write IIS configurations from the server. When I attempt to run these same scripts from my desk box they fail, though. Example:
Dim vdir
Set vdir = GetObject("IIS://servername/w3svc/226/root")
Error = "Invalid syntax"
The code works perfectly when run from one IIS server to another, but I'd like to run it from my XP Workstation. It would seem reasonable that there's a download of ADSI available that will make things work from my desktop, but I cannot find one. I downloaded ADAM but that only got me a small portion of the functionality I need.
Any hints out there? Thank you.
A: Sounds like the IIS ADSI Provider isn't installed/registered (probable cause of the syntax error on the protocol IIS: in the string)
Just tracking down where the provider DLLs come from - suspect it gets installed with:
IIS 6.0 Management Pack
A: Stephbu is correct and that answer was helpful, but it is not sufficient. In order to use ADSI remotely from my XP workstation, I needed to install IIS 5.1. Once that was installed, all my scripts started working. If there is an installation that can make the scripts work without making my computer an IIS server, I am unaware of it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I checkout files under Perforce from within Emacs? I use Perforce for source control at work and I want to 'open for edit' files that under source control from within Emacs.
How can that be done? What do I need to setup in Emacs? Is there a plug in? I also want to perform other p4 operations such as submitting my changes, etc.
A: Perforce/Emacs Integration
http://p4el.sourceforge.net/p4.el.html
Once you have p4.el installed and ready to go you can use emacs' built-in help to review p4.el's functions: C-x p ? will bring up the list. C-h f p4-xyz provides defun information for p4-xyz. Each Perforce command has a corresponding p4.el command. The vc model is not followed. Use 'C-x p help commands' for Perforce help...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Is it possible to embed Cockburn style textual UML Use Case content in the code base to improve code readability? experimenting with Cockburn use cases in code
I was writing some complicated UI code. I decided to employ Cockburn use cases with fish, kite, and sea levels (discussed by Martin Fowler in his book 'UML Distilled'). I wrapped Cockburn use cases in static C# objects so that I could test logical conditions against static constants which represented steps in a UI workflow. The idea was that you could read the code and know what it was doing because the wrapped objects and their public constants gave you ENGLISH use cases via namespaces.
Also, I was going to use reflection to pump out error messages that included the described use cases. The idea is that the stack trace could include some UI use case steps IN ENGLISH... It turned out to be a fun way to achieve a mini, pseudo, light-weight Domain Language without having to write a DSL compiler. So my question is whether or not this is a good way to do this? Has anyone out there ever done something similar?
c# example snippets follow
Assume we have some aspx page which has 3 user controls (with lots of clickable stuff). User must click on stuff in one particular user control (possibly making some kind of selection) and then the UI must visually cue the user that the selection was successful. Now, while that item is selected, the user must browse through a gridview to find an item within one of the other user controls and then select something. This sounds like an easy thing to manage but the code can get ugly.
In my case, the user controls all sent event messages which were captured by the main page. This way, the page acted like a central processor of UI events and could keep track of what happens when the user is clicking around.
So, in the main aspx page, we capture the first user control's event.
using MyCompany.MyApp.Web.UseCases;
protected void MyFirstUserControl_SomeUIWorkflowRequestCommingIn(object sender, EventArgs e)
{
// some code here to respond and make "state" changes or whatever
//
// blah blah blah
// finally we have this (how did we know to call fish level method?? because we knew when we wrote the code to send the event in the user control)
UpdateUserInterfaceOnFishLevelGoalSuccess(FishLevel.SomeNamedUIWorkflow.SelectedItemForPurchase);
}
protected void UpdateUserInterfaceOnFishLevelGoalSuccess(FishLevel.SomeNamedUIWorkflow goal)
{
switch (goal)
{
case FishLevel.SomeNamedUIWorkflow.NewMasterItemSelected:
//call some UI related methods here including methods for the other user controls if necessary....
break;
case FishLevel.SomeNamedUIWorkFlow.DrillDownOnDetails:
//call some UI related methods here including methods for the other user controls if necessary....
break;
case FishLevel.SomeNamedUIWorkFlow.CancelMultiSelect:
//call some UI related methods here including methods for the other user controls if necessary....
break;
// more cases...
}
}
}
//also we have
protected void UpdateUserInterfaceOnSeaLevelGoalSuccess(SeaLevel.SomeNamedUIWorkflow goal)
{
switch (goal)
{
case SeaLevel.CheckOutWorkflow.ChangedCreditCard:
// do stuff
// more cases...
}
}
}
So, in the MyCompany.MyApp.Web.UseCases namespace we might have code like this:
class SeaLevel...
class FishLevel...
class KiteLevel...
The workflow use cases embedded in the classes could be inner classes or static methods or enumerations or whatever gives you the cleanest namespace. I can't remember what I did originally but you get the picture.
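To make the picture concrete, the wrappers could be nothing more than static classes with nested enums; this is only a sketch with invented member names, not the original project's workflows:
namespace MyCompany.MyApp.Web.UseCases
{
    public static class FishLevel
    {
        // One enum per named workflow; members read like use case steps.
        public enum SomeNamedUIWorkflow
        {
            NewMasterItemSelected,
            DrillDownOnDetails,
            CancelMultiSelect,
            SelectedItemForPurchase
        }
    }

    public static class SeaLevel
    {
        public enum CheckOutWorkflow
        {
            ChangedCreditCard,
            ConfirmedOrder
        }
    }
}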
A: I've never done it, but I've often thought about writing code in UC style, with main success path first and extensions put in as exceptions caught down below. Have not found the excuse to do it - would love to see someone try it and code, even if after the experiment we conclude it's awful, it will still be interesting to try out and refer to.
A: I think this is a variation on the Mediator Pattern from Design Patterns (Gang of Four) -- so I would say that it is a valid way to do this. In the Pattern, they discuss that the complicated interaction between controls is the reason to use it.
Edit: Link to Mediator on Wikipedia
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Which configuration method do you prefer in .net? Why?
*
*You can use App.config; but it only supports key/value pairs.
*You can use .Net configuration, configuration sections; but it can be really complex.
*You can use Xml Serialization/Deserialization by yourself; your classes-your way.
*You can use some other method; what can they be? ...
Which of these or other methods (if there are) do you prefer? Why?
A: Put your configuration into a database. If you run your app on more than 1 machine (eg a client-server app) then all the per-machine config systems are a PITA. A single config area is the best way to place your configuration. Write a gui to manage it and you'll be very happy.
Rolling out app.config files to 200 client boxes.. its not fun, especially when one gets missed (and they do, believe me).
A: When key value pairs are not enough I use Configuration Sections as they are not complex to use (unless you need a complex section):
Define your custom section:
public class CustomSection : ConfigurationSection
{
[ConfigurationProperty("LastName", IsRequired = true,
DefaultValue = "TEST")]
public String LastName
{
get { return (String)base["LastName"]; }
set { base["LastName"] = value; }
}
[ConfigurationProperty("FirstName", IsRequired = true, DefaultValue =
"TEST")]
public String FirstName
{
get { return (String)base["FirstName"]; }
set { base["FirstName"] = value; }
}
public CustomSection()
{
}
}
Programmatically create your section (if it doesn't already exist):
// Create a custom section.
static void CreateSection()
{
try
{
CustomSection customSection;
// Get the current configuration file.
System.Configuration.Configuration config = ConfigurationManager.OpenExeConfiguration(@"ConfigurationTest.exe");
// Create the section entry
// in the <configSections> and the
// related target section in <configuration>.
if (config.Sections["CustomSection"] == null)
{
customSection = new CustomSection();
config.Sections.Add("CustomSection", customSection);
customSection.SectionInformation.ForceSave = true;
config.Save(ConfigurationSaveMode.Full);
}
}
catch (ConfigurationErrorsException err)
{
//manage exception - give feedback or whatever
}
}
Following CustomSection definition and actual CustomSection will be created for you:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<configSections>
<section name="CustomSection" type="ConfigurationTest.CustomSection, ConfigurationTest, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" allowLocation="true" allowDefinition="Everywhere" allowExeDefinition="MachineToApplication" overrideModeDefault="Allow" restartOnExternalChanges="true" requirePermission="true" />
</configSections>
<CustomSection LastName="TEST" FirstName="TEST" />
</configuration>
Now Retrieve your section properties:
CustomSection section = (CustomSection)ConfigurationManager.GetSection("CustomSection");
string lastName = section.LastName;
string firstName = section.FirstName;
A: I was a network/system admin in the past, and now I develop internal utilities for database applications. What I've found is this:
Simple Non-Nested configuration files are the best for applications that won't be changing where they access their resources very much.
Anything more complex needs to go into a database with an administration UI. This only applies to regular business users. If you are worried about the database getting corrupted, then use the complex configuration file approach. Files tend to corrupt less than databases.
Now, if your users are other developers, then you will have a lot more flexibility on what to use to store your configurations.
A: If I can get away with it I will just use the App.Config, however, if I need something more complex I will use custom configuration sections. Yes it is a pain to get an understanding of in the beginning, but a unified configuration source, and familiar configuration for all settings is worth the time investment in my opinion.
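For the simple end of that spectrum, the whole thing is one appSettings entry and one ConfigurationManager call; a minimal sketch (the key name is invented):
// App.config:
//   <configuration>
//     <appSettings>
//       <add key="SmtpServer" value="mail.example.com" />
//     </appSettings>
//   </configuration>

using System.Configuration;   // add a reference to System.Configuration.dll

static class SmtpSettings
{
    public static string Server
    {
        get { return ConfigurationManager.AppSettings["SmtpServer"]; }
    }
}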
A: I use a custom xml configuration file, where a different config file is used for each environment (dev/qa/prod). The config files are templates that are dynamically instantiated with things like host/port configurations for services - this makes multiple environments and failover very easy, as they can be handled by the template instantiation code.
Of course if you have very little config and are not concerned with multiple environments then app.config is more standard and is probably the best way to go.
A: I find NameValueCollectionHandler the easiest and best, and I generally would link off to an external config file via the configSource attribute.
I try to put the ABSOLUTE MINIMUM configuration in config files, with most of it being configured in code with an Application that is self-aware of its deployment environment (such as by machine name or IP Address if known). Of course this required much more pre-planning and knowledge of your environments, but much less headache when deploying.
A: I think key/value configurations work pretty well for simple configuration files. It becomes a problem when the file starts to grow and becomes difficult to maintain. We started to split configuration files into "common" and "specific" application configurations. The file access is transparent to the app; the "common" values are the same in most cases, but the "specific" ones differ for every deployed application.
A: I use a custom xml config file. Each setting has a key, value and type.
It has one main section that contains all settings and additional sections containing setting overrides for particular environments (dev, staging, live). This i don't need to replace sections of the file when deploying.
I have a small wrapper which you can call to get a particular setting or a dictionary containing all of them.
I recently created a T4 template that will read the config file and create a static strongly typed settings class. That's been a huge timesaver.
A: I keep most of my config in IoC container, e.g. Spring.Net.
A: If you have .NET 3.0 available, I find the XamlReader/XamlWriter very handy for storing settings. They can write/read any .NET object to XAML if:
*
*The object has a parameterless constructor
*The properties to read/write have public getters and setters
It is especially nice that you don't have to decorate your settings objects with any attributes.
A: dataset.WriteXML()/dataset.ReadXML() work pretty well for me when the app.config doesn't cut it anymore.
A: Mostly I prefer using a custom XML file and XML serialization to read and write these config files... Not restricted to key/value pairs and not complex to implement...
A: I've had good luck rolling my own special class that returns config data from a ".settings" file associated with the calling assembly. The file is XML, and the settings class exposes it publicly as an XDocument. Additionally, the indexer for this settings class returns element values from /settings/setting nodes.
Works great for simple applications where you just need a key/value pair access to settings, and works great for complicated settings where you need to define your own structure and use System.Xml.Linq to query the XML document.
Another benefit of rolling your own is that you can use FileSystemWatcher and callback Action type to automatically fire a method when the file changes at runtime.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
}
|
Q: How do I branch an individual file in SVN? The subversion concept of branching appears to be focused on creating an [un]stable fork of the entire repository on which to do development. Is there a mechanism for creating branches of individual files?
For a use case, think of a common header (*.h) file that has multiple platform-specific source (*.c) implementations. This type of branch is a permanent one. All of these branches would see ongoing development with occasional cross-branch merging. This is in sharp contrast to unstable development/stable release branches which generally have a finite lifespan.
I do not want to branch the entire repository (cheap or not) as it would create an unreasonable amount of maintenance to continuously merge between the trunk and all the branches. At present I'm using ClearCase, which has a different concept of branching that makes this easy. I've been asked to consider transitioning to SVN but this paradigm difference is important. I'm much more concerned about being able to easily create alternate versions for individual files than about things like cutting a stable release branch.
A: You don't have to branch the entire repository. You could make branches of folders in your project (such as an include folder). As others have noted, you can also do a "copy" of just a single file. Once you have a copy of a file or folder, you "switch" to the branched file or folder to work on the branch version.
If you create a separate branches folder in the repository, you could copy your branched files there via server side commands:
svn copy svn://server/project/header.h svn://server/branched_files/header.h
Then you could switch that file to use the branched_files repository path.
A: Sadly, I think the real answer here is that ClearCase handles this situation a lot better than Subversion. With subversion, you have to branch everything, but ClearCase allows a kind of "lazy branch" idea that means only a certain group of files are branched, the rest of them still follow the trunk (or whichever branch you specify).
The other solutions provided here don't really work as you intend; they are just copying the file to a different path. Now you have to do odd things to actually use that file.
Erm, sorry. That wasn't really a very good answer. But there isn't a good solution to this with Subversion. Its model is branch and merge.
Edit: OK, so expanding on what crashmstr said. You could do this:
svn cp $REP/trunk/file.h $REP/branched_files/file.h
svn co $REP/trunk
svn switch $REP/branched_files/file.h file.h
But wow!, is that prone to errors. Whenever you do a svn st you will see this:
svn st
S file.h
A bit noisy that. And when you want to branch a few files or modules within a large source repository it will start to get very messy.
Actually, there's probably a decent project in here for simulating something like ClearCase's branched files with svn properties and switching, writing a wrapper around the bog standard svn client to deal with all the mess.
A: Here is how I understand your problem. You have the following tree:
time.h
time.c
and you need variants of it for multiple architectures:
time.h is common
time.c (for x386), time.c (for ia64), time.c (for alpha),...
In your current VCS you can do this by creating as many branches of time.c as needed, and when you check out the files from the VCS you automatically get the latest time.h from the common trunk and the latest time.c from the branch you are working on.
The problem you are concerned about is that if you use SVN, when checking out a branch you will have to merge time.h from trunk very often or risk working on an older file (as compared to the trunk); that amount of overhead is not acceptable to you.
Depending on the structure of your source code, there might be a solution though. Imagine that you have
/
/headers/
/headers/test.h
/source/
/source/test.c
Then you could branch /, and use the svn:externals feature to link your headers to the trunk's head. It only works on directories and has some limitations with regard to committing back to test.h (you have to go into the headers directory for it to work), but it could work.
A: A Subversion "branch" is just a copy of something in your repository. So if you wanted to branch a file you'd just do:
svn copy myfile.c myfile_branch.c
A: I don't think there is much point in branching a single file? There is no way to test it with the trunk code?
You could take a patch instead if you want to back out changes and apply them later on.
A: Are you sure you really need this feature in your VCS?
Why not use the C preprocessor and #ifdef away the code you don't need? Or any similar tool.
something like:
// foo.h:
void Foo();
// foo_win32.c
#ifdef _WIN32
void Foo()
{
...
}
#endif
// foo_linux.c
#ifdef __GNUC__
void Foo()
{
...
}
#endif
Sometimes if it doesn't fit right, then it's not the right solution.
A: A branch in SVN is just a copy. I believe that to do it the way you are hoping to, you'd have to have each version of the file in a separate directory in the repository, and check it out into your source folder. I.E. treat that file like a separate project.
A: A branch in Subversion is exactly what you are talking about. All of the files are an exact copy of the trunk, with the exception of the ones you change. This is the "cheap copy" methodology talked about in the SVN Book. The only caveat is the need to merge the trunk into the branch from time to time to ensure that the changes made there are reflected in the branch. Of course, if those changes are not desired, no trunk->branch merges need to happen.
One easy way to allow trunk changes to be merged in automatically (which simulates the ClearCase paradigm) would be to use a pre-commit hook script to merge the trunk changes in prior to the commit. (In fact, this is always a good strategy to prevent code drift.)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: How can I resolve the drifting clock for my Virtual Machine? My Virtual Machine's clock drifts pretty significantly. There's documentation out there about dealing with this, but nothing seems to be working very well.
Anyone have any suggestions, things that worked well for them, ...
Supposedly updating regularly via ntp is not a good solution.
A: Just to add some data about why NTPD is not a good solution. NTPD is a daemon that tries to compensate for local clock drift; if the internal clock drifts by X seconds in a day, then instead of jumping ahead/back in one forced step (as "ntpdate" does), NTPD tries to add/remove some cycles to the clock so that in time, normally within 15 minutes, the clock runs accurately enough and the compensation overcomes the X seconds that the server gains/loses in a day. This has the advantage that you won't see any time of day repeated, which is a MUST for transactional systems.
But to be able to do this, NTPD requires that the local clock does a reasonably good job, which normally means that the local clock won't drift by more than 42 seconds a day (more or less; I am not sure of the exact number). This is normally a problem in virtual machines, since the clock is software controlled, so if the HOST has too much load you could see the CLIENT's clock run more slowly, and if it doesn't, the clock could run too fast. The problem here for NTPD is that the local clock is not reliable and doesn't have a constant drift in time; it may drift more or less depending on the load of the HOST system.
So in this case it's better to install the client tools as has been suggested, and synchronize the CLIENT clock with the HOST's clock (normally referred to as the "wall clock").
A: *
*Read your VMware documentation carefully before you listen to anyone. We are running ESX5.
"Timekeeping best practices for Linux guests", among other things, says:
Ref: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006427
NTP Recommendations
Note: VMware recommends you to use NTP instead of VMware Tools periodic time synchronization. NTP is an industry standard and ensures accurate time keeping in your guest. You may have to open the firewall (UDP 123) to allow NTP traffic.
This is a sample /etc/ntp.conf:
tinker panic 0
restrict 127.0.0.1
restrict default kod nomodify notrap
server 0.vmware.pool.ntp.org
server 1.vmware.pool.ntp.org
server 2.vmware.pool.ntp.org
driftfile /var/lib/ntp/drift
This is a sample (RedHat specific) /etc/ntp/step-tickers:
0.vmware.pool.ntp.org
1.vmware.pool.ntp.org
The configuration directive tinker panic 0 instructs NTP not to give up if it sees a large jump in time. This is important for coping with large time drifts and also resuming virtual machines from their suspended state.
Note: The directive tinker panic 0 must be at the top of the ntp.conf file.
It is also important not to use the local clock as a time source, often referred to as the Undisciplined Local Clock. NTP has a tendency to fall back to this in preference to the remote servers when there is a large amount of time drift.
An example of such a configuration is:
server 127.127.1.0
fudge 127.127.1.0 stratum 10
Comment out both lines.
After making changes to NTP configuration, the NTP daemon must be restarted. Refer to your operating system vendor’s documentation.
A: VMware has a really good PDF doc on this problem.
Basically, the host will slew the ticks delivered to your guests as it can.
Don't run NTP or timed or junk like that. Just install vmware-guestd and let the host slew your ticks. If you still lose ticks, then any other solution will have major drift too.
If you can, use a guest OS that has a low frequency tick rate. Newer versions of Linux come with 1000Hz ticks, but it used to be only 100Hz. That seems easier for the host to deliver. A kernel rebuild is usually needed to change the HZ value.
A: There is no definitive answer because several methods exist, each having its pros and cons. Which one to choose depends on your tasks, server load, operating system, etc.
Read vmware_timekeeping.pdf for a thorough understanding of the issue.
Quick recipes for Linux can be found in a separate KB article.
A: The best solution to this problem (if locally connected) is:
Install a local NTP server and put "service ntp restart" in an infinite loop with a sleep time of approximately 30 seconds, by adding the code to the "/etc/init.d/rc.local" file. Reboot the system and the time will be synchronized with the server computer.
A: Doesn't installing the virtual machine additions (tools) synchronize the clock between the guest and host OS?
A:
Supposedly updating regularly via ntp
is not a good solution
That's the solution I would recommend, though. Why is it not considered good at your location?
A: Install NTP if you don't already have it.
ntpdate will set the clock correctly, then ntpd can keep the clock accurate.
The NTP pool project provides a large pool of NTP servers to pick from.
Edit: just noticed you said you think NTP is not a good solution - why? If you're worried about the effect of the clock changing, NTP is ideal, as ntpd does not jump the clock forwards or backwards; instead it "slews" the clock by speeding it up/down slightly until it's back in line with the correct time.
A: I had the same problem and solved it by
*
*installing vmware-guestd
*sending the kernel an option clocksource=acpi_pm
*running hwclock -s hourly as root.
A: This is an old issue but one that was affecting us recently. What I found was that any of our VMs that were running VMware Tools were affected by the issue.
More recently we had started using open-vm-tools, and on those VMs the option was not set. Since open-vm-tools is fully supported and recommended by VMware, I would suggest using it over VMware Tools: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2073803
If open-vm-tools is in a repository that you use it is also simple to install via yum install or apt-get install etc.
A: You can use cmd and
net time \\computer_name /set
to set the clock remotely (or in a script, for example).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
}
|
Q: Adding version control to an existing project I am working on a project that has grown to a decent size, and I am the only developer. We currently don't use any version control, but I definitely need to start.
I want to use Subversion. What would be the best way to transfer an existing project to it?
I have a test server that I use for developing new features, then transfer those files to the 2 production servers. Is there a tool that will automate the upload to the test, then the deployment to the live servers?
All this is developed in ASP.NET using Visual Studio (if that matters)
A: I didn't see anybody addressing this part of your question/post:
Is there a tool that will automate the
upload to the test, then the
deployment to the live servers?
One gotcha is that Subversion creates hidden .svn folders in your working copy. One of the solutions is to use the svn export command. That will make a copy of your repository on another directory without the .svn folders.
As far as I know there is no automated tool for this. You can create a batch file that will issue the svn export command like this:
svn export C:\MyRepository\Path C:\DestinationPath
Just include this as part of your deployment process. Make sure to deploy your code from this exported directory and not your working copy. You should be fine then.
A: +1 for Matt Howell's answer.
I don't know how many times I've added a new project by creating the directory in the repo, Importing the new project into it, then Checking it out again. This is why Matt's answer is best:
*
*I create a project called FRED and do some work
*I create a directory in SVN repo, and import FRED into it.
*But the FRED directory is still not under source control - it lacks .svn files, so I need to check it out, into a new directory, let's call it FRED-NEW, recreating all the files
*I then need to delete FRED, leaving me nervous something's got lost or corrupted along the way.
As Matt says, check out a step earlier, while the folder in the SVN repo is still empty:
*
*Create a directory in SVN AND CHECK IT OUT into FRED-NEW.
*Copy FRED-NEW/.svn into FRED/
*Right-click Add all the files. FRED is now under source control without recreating and deleting.
Still better:
*
*Create a directory in SVN, and BEFORE adding files to it, check the empty directory out DIRECTLY INTO FRED.
*Right-click Add all files.
Also if you're using CLI and didn't think of this until after committing the new files, see alroc's answer below: --force when checking out into FRED.
A: Import your existing base into a SVN repository, check it back out and begin working again.
A: You should look at Visual SVN, which integrates seamlessly into Visual Studio.
A: Technical issues aside, just get SVN and start using it. You will see the immediate benefits (looking at code history, diff-debugging to see what change introduced the bug that was not present last week), and you will never want to look back.
I, personally, do not like my source control integrated in the IDE. I use Tortoise SVN that integrates with Windows Explorer and lets you check in, diff, merge, etc files straight from the OS.
A: No SVN server required.
Use Tortoise Mercurial http://sourceforge.net/project/showfiles.php?group_id=199155
Setup local repo
*
*Download and Install
*Open an explorer window to the base directory of you project
*Right-Click -> TortoiseHG -> Create Repository Here -> Ok
*Right-Click -> HG Commit...
*Type your commit comment, select which files to track, and click Commit
Setup remote repo over file share (other transport methods available)
*
*Open explorer window to remote folder
*Right-Click -> TortoiseHG -> Clone a Repository
*
*Alternatively, just copy your local repo over
Updating remote repo after committing local
*
*Open explorer window to remote folder
*Right-Click -> TortoiseHG -> Synchronize
*Select "Update to new tip" in Pull menu
*Enter the path to your local repo into the "Remote Path:" input box
*Click Pull
A: To expand a little on the previous answer...
1) Create a new SVN repository
2) Commit all the code you've worked on so far to it
3) Check all that code OUT again, to create a working copy on your dev machine
4) Work!
It's definitely not a hurdle, really.
A: Subversion Server install...
Subversion Client Libraries install...
Install Ankh for integration with VS
Install Tortoise for File Manager integration
In File Manager, right click on the top level directory with the Solution... Import...
A: I'm wondering why you chose Subversion. If your project is not using any VC yet, maybe you should consider using Mercurial or Git instead.
Their strong point is that they don't need a central repository, which means your programmers can check out your project, go home, work (without needing a connection to your servers), and the next day come back to the office and sync their repositories.
If SVN is not a major requirement, I recommend considering either of these DVCSes.
A: It may be overkill for what you require, but you could create an SVN repository on one machine and then set up a continuous integration server on another. TeamCity is one I would recommend. (You may also be able to use Virtual PC for this if hardware is at a premium.)
This will allow you to add the custom build steps to deploy onto the production servers once a build is complete and tested.
See TeamCity for more information. It also provides a plugin for Visual Studio as well.
A: It is easy to start using Subversion. Download TortoiseSVN, which integrates SVN into Windows Explorer. Download AnkhSVN for VS integration. Set up svnserve as a Windows Service (it's in the docs).
Then all you do is check out an empty directory from svn and copy all your code files into it. Then add them with Tortoise, and commit. When you change files in Visual Studio, Ankh will show you which files you've changed and you can commit them there.
We do all our deployment with NAnt scripts, although you may find batch scripts and xcopy sufficient.
A: I'm more familiar with Perforce than subversion, but putting a project under version control is not at all hard.
Once you've installed and got your version control software running, clean out your code directory of everything that isn't source (for instance, run 'make clean'). Then just use the command to add new files to your repository, recursively. Follow that with a submit, and you are done. I recommend checking out onto a different machine and building at least once to make sure you have everything you need to build.
As for deploying onto servers, that's not really a version control problem. You would typically either put that into your build system (i.e. 'make testinstall', 'make install') or just write shell scripts.
A: +1 on the answers provided by Joe and Steve. I would also mention that it is important to set up your ignore lists or SVN props so that you don't check in user files, ReSharper settings, etc.
Also make sure you include everything that may be needed for the build, such as build scripts, 3rd party assemblies, and external tools such as NUnit, NAnt, etc.
While you are at it, I would highly recommend you look at CC.net, and get a continuous integration server installed to automate your build.
Having source control is one thing, using it properly is another entirely. Remember to check in frequently and early.
A: The easy answer is Subversion along with Tortoise SVN.
I've used Subversion with visual studio, and I've implemented it with an existing project. There is a free Visual Studio plugin called Ankh, which I used with some success. However, I have had some issues where Ankh refuses to stay in sync with the real state of the files as reflected in the .svn metadata (it did things like insist a file needed to be updated when tortoise would show me it was up to date). In these instances, a visual studio restart fixed the issues, but that is painful and tedious for me.
Currently, I stopped using Ankh and just work my project as normal in VS and then use Tortoise and windows explorer to check them in/out. This works flawlessly. No VS refreshes or restarts necessary.
A: 1) Create a new repository - you can create it on the test server and later transfer it to a dedicated server/NAS if it makes things simple for you.
2) Import all your existing source code into the repository.
3) Using the 'svn' command line tool (and its related tools, like svnadmin) you can create a batch file which will automate the upload and deployment process (the latter combined with the compiler, of course).
You can find more information here:
SVN book - just start reading it - you don't need to read it all to start and get svn running.
MSBuild - a build automation platform by Microsoft, although it may be overkill, depending on the size of your project.
If you have Visual Studio on the same computer as the batch file, you may use it to compile your solution, although I'm suspecting you'll hit scaling problems in the future.
A: Version control and deployment are two separate issues (although a good version control system can make the deployment a more consistent, reproducible process). Once you have your version control server set up you can use a set of simple script/batch files to automate checking the code out and deploying it to the server.
A: To add to the previous answers, I would also recommend the obvious (but it should be made explicit) advice:
*
*choose a stable state of your project to import in your repository (whatever tool you choose)
*immediately create a label (or tag in SVN), once the import is done and 'checked' (like 'does it compile?', 'do we have all the settings files we need?', ...)
*think about the different 'development efforts' you will need to do during the lifecycle of this project, and that will give you a good idea of what your branches should look like.
(a maintenance branch for an old version already in production while you are developing the next version, a merge branch to isolate complicated merges, patch branches, ...)
Now, beware:
Is there a tool that will automate the upload to the test, then the deployment to the live servers?
That part of your question refers to a 'release management' process, and that is very different from 'version management'.
I am not sure the version tool you choose can help actively here. Especially when you consider there should be no version control tool on a production server (in order to keep the dependency of a production server on any tool to a minimum: only monitoring and reporting tools should be allowed, in addition of course to your program, here a web server for instance).
A: In addition to all the other comments about the practicalities of getting the project under source control, I'd encourage you to take a look at Streamed Lines: Branching Patterns for Parallel Software Development as a guide to codeline and branching policies; it might save you some rework later.
Also, Eric Sink has a great collection of posts introducing the various source code control concepts - Source Control HOWTO
A: I have recently started to love the simplicity of Bazaar to solve the problem of starting version control after already having hacked on an application for a while.
With bazaar it is really only a few simple commands:
1) bzr init
2) bzr add [the files you are interested in]
3) bzr commit
Note that this does not setup any central repository. But you can do that as well.
Regarding using it as a deployment tool I am quite sure I read something about it not long ago. The documentation is really good anyway.
A: The question is clearly on Subversion (SVN is the alias).
These are the steps:
1) Create a New Repository (if needed, if using VisualSVN Server then very easy)
2) Right click on the folder whose files and subfolders you want to put into your repository
3) In the right-click menu go to TortoiseSVN
4) Choose IMPORT
5) Place in trunk (best practice)
e.g. https://computername:8443/svn/MyCoolCode/trunk
A: The question was focused clearly on an existing project. I have not yet found an appropriate answer in this thread and played around until I had the solution. The hassle comes when you check out as described in many answers: you end up with a second, versioned folder, your existing project remains unversioned, and you are reluctant to copy/paste and/or rename your folders.
The solution to this problem is as follows:
Suppose you have a bunch of projects in PC1, all collected as subfolders under a folder named "Projects" and you want to version all of them in one repository. Then you would like to check it out on PC2 with the same folder structure. Then you do the following:
*
*Create a folder for your repository on the network drive and make sure that it is named "Projects", i.e. the same name as the parent folder of your development projects. For example create a folder like X:\Repository\Projects on a network drive.
*Right Click on this folder and choose TortoiseSVN -> Create Repository Here
*Right Click on your development "Projects" folder on PC1 (your existing project) and press TortoiseSVN -> Import. Choose the correct repository with the name "Projects" (in case you already have other repositories on the network drive). Now your projects are at the disposal of other PCs.
*Make a parent folder on PC2 where you want to have the "Projects" folder.
*Right Click on this Parent Folder on PC2 and choose SVN Checkout. The projects are now available on PC2.
*From now on, you use "commit" to store changes in the repository and "update" to get the latest version from the repository.
A: I use mercurial on my desktop and I love it. Creating the repository is super easy...
hg init /path/to/repository
Add the files...
hg add /folder/pattern
OR
hg add FILE
Then you just commit...
hg commit
And you're up and running.
The other great thing is when I want to sync to my laptop it's just...
hg pull //desktop_name/path/to/repo/
hg update
The thing that I like about Subversion is the plugin for Visual Studio; I stay on top of my updates more when the status icons are staring me in the face all of the time. The plugin may definitely make up for the hassle of setting the SVN repository up if you're going to be working with the one project a lot.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: Handling large numbers in C++? What is the best way to handle large numeric inputs in C++ (for example 10^100)?
For algorithms I usually switch over to ruby and I sometimes use strings.
Any other good methods?
A: If you wish to make your own code for the purpose try using strings to store big numbers... you can then create basic ops like + - / * on them... for example -
#include <iostream>
#include <string>
using namespace std;
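// adds two non-negative integers given as decimal digit strings and returns their sum as a string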
string add (string &s1, string &s2){
int carry=0,sum,i;
string min=s1,
max=s2,
result = "";
if (s1.length()>s2.length()){
max = s1;
min = s2;
} else {
max = s2;
min = s1;
}
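// add the overlapping (rightmost) digits of the two numbers, carrying as we go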
for (i = min.length()-1; i>=0; i--){
sum = min[i] + max[i + max.length() - min.length()] + carry - 2*'0';
carry = sum/10;
sum %=10;
result = (char)(sum + '0') + result;
}
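// then propagate the carry through the remaining leading digits of the longer number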
i = max.length() - min.length()-1;
while (i>=0){
sum = max[i] + carry - '0';
carry = sum/10;
sum%=10;
result = (char)(sum + '0') + result;
i--;
}
if (carry!=0){
result = (char)(carry + '0') + result;
}
return result;
}
int main (){
string a,b;
cin >> a >> b;
cout << add (a,b)<<endl;
return 0;
}
A: Are you looking for how to perform operations on the large inputs you receive? There is a big integer C++ library (similar to Java) that allows you to perform arithmetic operations...
A: Assuming you are talking about inputting numbers, double precision would get you up to 1.7976931348623157 x 10^308.
A: You might want to have a look at gmplib, an arbitrary precision number handling library for C and C++.
A: If you want it to be accurate, you need a library made to deal with big numbers. Java has BigInteger, which will always be accurate no matter how many digits you want to take it to, and provides math operations on them. All the source code is included, you could transfer it, but this really isn't the kind of thing C++ is best at--I'd use a JVM based language and use one of the Big libraries.
I don't think I'd use ruby for this unless you wanted it to be slow, and I'm assuming that since you are talking about C++, speed is somewhat of a design consideration.
A: As others have already pointed out, there are various bignum/arbitrary precision libraries in C++ that you would likely find useful. If speed isn't necessary, I'm under the impression that Python and Lisp both use bignums by default.
A: It sounds like you're looking for a way to enter Arbitrary Precision numbers.
Here are two libraries you could use: GMP and MAPM
A: Check out The Large Integer Case Study in C++.pdf by Owen Astrachan. I found this file extremely useful, with a detailed introduction and code implementation. It doesn't use any 3rd-party library. I have used this to handle huge numbers (as long as you have enough memory to store a vector<char>) with no problems.
Idea:
It implements an arbitrary precision integer class by storing big int in a vector<char>.
vector<char> myDigits; // stores all digits of number
Then all operations related to the big int, including <<, >>, +, -, *, ==, <, !=, >, etc., can be done based on operations on this char array.
Taste of the code:
Here is the header file, you can find its cpp with codes in the pdf file.
#include <iostream>
#include <string> // for strings
#include <vector> // for sequence of digits
using namespace std;
class BigInt
{
public:
BigInt(); // default constructor, value = 0
BigInt(int); // assign an integer value
BigInt(const string &); // assign a string
// may need these in alternative implementation
// BigInt(const BigInt &); // copy constructor
// ~BigInt(); // destructor
// const BigInt & operator = (const BigInt &);
// assignment operator
// operators: arithmetic, relational
const BigInt & operator += (const BigInt &);
const BigInt & operator -= (const BigInt &);
const BigInt & operator *= (const BigInt &);
const BigInt & operator *= (int num);
string ToString() const; // convert to string
int ToInt() const; // convert to int
double ToDouble() const; // convert to double
// facilitate operators ==, <, << without friends
bool Equal(const BigInt & rhs) const;
bool LessThan(const BigInt & rhs) const;
void Print(ostream & os) const;
private:
// other helper functions
bool IsNegative() const; // return true iff number is negative
bool IsPositive() const; // return true iff number is positive
int NumDigits() const; // return # digits in number
int GetDigit(int k) const;
void AddSigDigit(int value);
void ChangeDigit(int k, int value);
void Normalize();
// private state/instance variables
enum Sign{positive,negative};
Sign mySign; // is number positive or negative
vector<char> myDigits; // stores all digits of number
int myNumDigits; // stores # of digits of number
};
// free functions
ostream & operator <<(ostream &, const BigInt &);
istream & operator >>(istream &, BigInt &);
BigInt operator +(const BigInt & lhs, const BigInt & rhs);
BigInt operator -(const BigInt & lhs, const BigInt & rhs);
BigInt operator *(const BigInt & lhs, const BigInt & rhs);
BigInt operator *(const BigInt & lhs, int num);
BigInt operator *(int num, const BigInt & rhs);
bool operator == (const BigInt & lhs, const BigInt & rhs);
bool operator < (const BigInt & lhs, const BigInt & rhs);
bool operator != (const BigInt & lhs, const BigInt & rhs);
bool operator > (const BigInt & lhs, const BigInt & rhs);
bool operator >= (const BigInt & lhs, const BigInt & rhs);
bool operator <= (const BigInt & lhs, const BigInt & rhs);
A: Consider boost::cpp_int
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>
int main()
{
using namespace boost::multiprecision;
cpp_int u = 1;
for(unsigned i = 1; i <= 100; ++i)
u *= i;
// prints 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000 (i.e. 100!)
std::cout << u << std::endl;
return 0;
}
A: Well, I think the best way to do such arithmetic calculations is by using strings. Give the input as command line arguments and then implement the logic using string functions like atoi() and itoa()! But hey, can this be done for multiplication and division? I think this way the length of the strings entered doesn't matter to the compiler, as long as the logic is fine.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: Getting started using Linq, what do I need? Basically what the title says. (Forgive me because I am a .NET newb)
In my department, we have a server running .net 3.5 and ever since I got into this section I have been using LINQ. However, I am starting a personal project on a different server (obviously), so 2 questions:
What do I need to get up and running with LINQ?
What does the server need to run LINQ?
Will .net 2.0 work on the server?
The code behind would be C# if that matters.
Edit:
Would I have to compile it in 3.5 or would 2.0 work?
A: To get up and running, I would definitely recommend checking out LINQ in Action.
Your compiler needs to be the .NET 3.5 one. If you are copying over only compiled code, then you will not need 3.5 on your server; you only need it on your development machine. This can help if your server admin is unwilling to install the 3.5 framework on your server. However, if you are publishing source code, say to a development server to compile, then yes, that server will need 3.5.
Once you have the 3.5 framework installed, you can run web apps either as 2.0 or 3.5. All you have to do is specify it in your Web.Config file.
If you are interested in working with LINQ to SQL and managing dbml files, you will need Visual Studio 2008. However, Visual Studio 2005 will still compile dbml files properly, given that you have the 3.5 framework installed.
A: I would encourage you to check out LinqPad as a learning tool. It's a standalone application that lets you play with Linq queries without worrying about getting it to run on a server.
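For context, this is the kind of query you can play with there -- plain LINQ to Objects, needing nothing beyond the 3.5 compiler and a reference to System.Core (the array and values are just an example):
using System;
using System.Linq;   // the Where/OrderBy extension methods live here

class LinqDemo
{
    static void Main()
    {
        int[] numbers = { 5, 8, 1, 12, 3 };

        // The query syntax below compiles to the same thing as
        // numbers.Where(n => n < 10).OrderBy(n => n)
        var small = from n in numbers
                    where n < 10
                    orderby n
                    select n;

        foreach (int n in small)
            Console.WriteLine(n);   // prints 1, 3, 5, 8
    }
}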
A: You should probably read Scott Guthrie's series of articles on LINQ:
Here are links to the 8 parts. You will need framework 3.5, if I am not mistaken, to make this work.
The series with detailed step by step instructions starts here: Part 1
A: You actually only need .NET 3.5 on the development machine. If you have 2.0 SP1 on the server, and you set all the .NET 3.5.0.0 references in your project to "copy local", you can run a 3.5 executable on a 2.0 machine.
(Screenshot: http://img90.imageshack.us/img90/4217/35haxxx2.png)
As a side note, you may have to delete the yourexecutable.exe.config in order for it to run. For some reason 2.0 sp1 has issues with .configs created by 3.5
I have two live apps running with this setup currently, it works very well.
A: I'm assuming you're talking about LINQ to SQL specifically.
You would only need v3.5 of the framework installed on your development machine and the server.
The server doesn't run linq; linq will in the end send SQL statements to your server.
The language doesn't matter.
A: You have to have at least .NET 2.0 SP1 on your server, and you will have to copy locally a handful of assemblies, like System.Core, etc...
but without SP1 you will not be able to execute LINQ code because of issues in System.dll.
A: LINQ requires framework 3.0/3.5, because it uses a lot of 3.0/3.5 extensions (extension methods, lambda expressions, the Func<> delegate, etc.), so it doesn't work with the 2.0 version.
If you develop a project using LINQ on your local PC, simply do a standard deploy (e.g. copy the dll, aspx, etc.) to the production server and it will work. No special actions are required.
I hope this helps.
A: LINQ runs on .NET CLR 2.0 runtime, but to be able to compile and use your LINQ code you need .NET 3.5 (C# 3.0 compiler), since .NET 3.5 adds some LINQ-related assemblies to the framework.
A: LINQ requires .NET v3.5
An excellent tool for getting to know and practice LINQ is Joseph Albahari's LINQPad
A: OK, first about the .NET 3.5 thing. The runtime (CLR) of 3.5 is still the same as in .NET 2.0. There are a bunch of new libraries plus (among other things) a new C# compiler.
So to run LINQ in theory you just need to have .NET 2.0 installed and throw a few additional assemblies into the GAC. If you want to know which ones, please add this to your question, I'm too lazy to look it up now.
If you can, just install the .NET 3.5 Framework on your server and yes, all .NET 2.0 programs will work there as before. Don't forget to scan the readme though :-)
I don't really understand your "What do I need to get up and running" question though. Do you want to learn about LINQ? Try LinqPad. Do you want to develop solutions with LINQ? Then at a minimum I would recommend VS2008 Express.
To compile LINQ expressions you have to use the C# 3.0 compiler which isn't in the .NET 2.0 framework. As stated above the output of that compiler is compatible with .NET 2.0 though.
A: ZAIN Naboulsi has some LINQ goodies. Check 'em out!
http://blogs.msdn.com/zainnab/archive/2008/03/29/collection-of-linq-resources.aspx
A: Keep learning LINQ the simple way by following Hooked on LINQ
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Can I use Terracotta to scale a RAM-intensive application? I'm evaluating Terracotta to help me scale up an application which is currently RAM-bounded. It is a collaborative filter and stores about 2 kilobytes of data per-user. I want to use Amazon's EC2, which means I'm limited to 14GB of RAM, which gives me an effective per-server upper-bound of around 7 million users. I need to be able to scale beyond this.
Based on my reading so far, I gather that Terracotta can have a clustered heap larger than the available RAM on each server. Would it be viable to have an effective clustered heap of 30GB or more, where each of the servers only supports 14GB?
The per-user data (the bulk of which are arrays of floats) changes very frequently, potentially hundreds of thousands of times per minute. It isn't necessary for every single one of these changes to be synchronized to other nodes in the cluster the moment they occur. Is it possible to only synchronize some object fields periodically?
A: I'd say the answer is a qualified yes for this. Terracotta does allow you to work with clustered heaps larger than the size of a single JVM although that's not the most common use case.
You still need to keep in mind a) the working set size and b) the amount of data traffic. For a), there is some set of data that must be in memory to perform the work at any given time and if that working set size > heap size, performance will obviously suffer. For b), each piece of data added/updated in the clustered heap must be sent to the server. Terracotta is best when you are changing fine-grained fields in pojo graphs. Working with big arrays does not take the best advantage of the Terracotta capabilities (which is not to say that people don't use it that way sometimes).
If you are creating a lot of garbage, then the Terracotta memory managers and distributed garbage collector has to be able to keep up with that. It's hard to say without trying it whether your data volumes exceed the available bandwidth there.
Your application will benefit enormously if you run multiple servers and data is partitioned by server or has some amount of locality of reference. In that case, you only need the data for one server's partition in heap and the rest does not need to be faulted into memory. It will of course be faulted if necessary for failover/availability if other servers go down. What this means is that in the case of partitioned data, you are not broadcasting to all nodes, only sending transactions to the server.
From a numbers point of view, it is possible to index 30GB of data, so that's not close to any hard limit.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Can I prevent an integer overflow in C# using an unsigned right shift? I want alwaysPositive to be assigned a positive number for all possible values of largeValue1 and largeValue2 (these are at least 1).
The following statement causes a buffer overflow:
int alwaysPositive = (largeValue1 + largeValue2) / 2;
I know I can prevent it by subtracting and adding:
int alwaysPositive = largeValue1 + ((largeValue2 - largeValue1) / 2);
But in other programming languages I can use an unsigned bitshift to do the trick:
int alwaysPositive3 = (largeValue1 + largeValue2) >>> 1;
How can I do this in C#?
The answers below all solve the problem. There are probably lots of ways to do this, but they all (including my solutions) have one thing in common: they all look obfuscated.
A: int alwaysPositive = (largeValue1 >> 1) + (largeValue2 >> 1) + (largeValue1 & largeValue2 & 0x01);
The idea behind the above is that if you pre-divide the values before you add them, then you will avoid the overflow, as the high-order bit of each is cleared. You then add some slight correction logic to increase the value by one if both were odd (this rounds down). If you instead want to round up whenever either one was odd, then you could change it to
int alwaysPositive = (largeValue1 >> 1) + (largeValue2 >> 1) + ((largeValue1 | largeValue2) & 0x01);
A: unchecked((largeValue1 + largeValue2) >> 1) is another option.
See the documentation for the unchecked keyword.
A: You can do it this way:
x = largeValue1;
y = largeValue2;
return (x&y)+((x^y)/2);
That's a bit-twiddling way to get the average of two integers without overflow.
If you want you can replace the division by two with a bit-shift, but the compiler will do that for you anyways.
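A quick way to sanity-check this (the test values are just an example): since x + y == (x ^ y) + 2*(x & y), the expression computes the average without ever forming the overflowing sum.
using System;

class AverageCheck
{
    static int Average(int x, int y)
    {
        // (x & y) is the bits the two values share; (x ^ y) / 2 is half of the differing bits.
        return (x & y) + ((x ^ y) / 2);
    }

    static void Main()
    {
        int a = int.MaxValue, b = int.MaxValue - 2;

        Console.WriteLine(Average(a, b));            // 2147483646
        Console.WriteLine(unchecked((a + b) / 2));   // wraps around and prints -2
    }
}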
A: You could use uints:
uint alwaysPositive = (uint)(largeValue1 + largeValue2) / 2;
A: Not to nitpick, but you mean "integer overflow" rather than "buffer overflow".
I don't know C#, so there may be another way, but you could mimic an unsigned shift by just masking off the top bit: (x >> 1) & 0x7FFFFFFF
A: try
{
checked { alwaysPositive3 = (largeValue1 + largeValue2); }
}
catch (OverflowException ex)
{
// Corrective logic
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How do you turn a dynamic site into a static site that can be demo'd from a CD? I need to find a way to crawl one of our company's web applications and create a static site from it that can be burned to a CD and used by traveling sales people to demo the web site. The back end data store is spread across many, many systems, so simply running the site on a VM on the salesperson's laptop won't work. And they won't have access to the internet while at some clients (no internet, cell phone....primitive, I know).
Does anyone have any good recommendations for crawlers that can handle things like link cleanup, flash, a little ajax, css, etc? I know odds are slim, but I figured I'd throw the question out here before I jump into writing my own tool.
A: wget can recursively follow links and mirror an entire site, so that might be a good bet (curl on its own doesn't recurse). You won't be able to use truly interactive parts of the site, like search engines, or anything that modifies the data, though.
Is it possible at all to create dummy backend services that can run from the sales folks' laptops, that the app can interface with?
A: By using a WebCrawler, e.g. one of these:
*
*DataparkSearch is a crawler and search engine released under the GNU General Public License.
*GNU Wget is a command-line operated crawler written in C and released under the GPL. It is typically used to mirror web and FTP sites.
*HTTrack uses a Web crawler to create a mirror of a web site for off-line viewing. It is written in C and released under the GPL.
*ICDL Crawler is a cross-platform web crawler written in C++ and intended to crawl websites based on Website Parse Templates using the computer's free CPU resources only.
*JSpider is a highly configurable and customizable web spider engine released under the GPL.
*Larbin by Sebastien Ailleret
*Webtools4larbin by Andreas Beder
*Methabot is a speed-optimized web crawler and command line utility written in C and released under a 2-clause BSD License. It features a wide configuration system, a module system and has support for targeted crawling through local filesystem, HTTP or FTP.
*Jaeksoft WebSearch is a web crawler and indexer build over Apache Lucene. It is released under the GPL v3 license.
*Nutch is a crawler written in Java and released under an Apache License. It can be used in conjunction with the Lucene text indexing package.
*Pavuk is a command line web mirror tool with an optional X11 GUI crawler, released under the GPL. It has a bunch of advanced features compared to wget and httrack, e.g. regular-expression-based filtering and file creation rules.
*WebVac is a crawler used by the Stanford WebBase Project.
*WebSPHINX (Miller and Bharat, 1998) is composed of a Java class library that implements multi-threaded web page retrieval and HTML parsing, and a graphical user interface to set the starting URLs, to extract the downloaded data and to implement a basic text-based search engine.
*WIRE - Web Information Retrieval Environment [15] is a web crawler written in C++ and released under the GPL, including several policies for scheduling the page downloads and a module for generating reports and statistics on the downloaded pages so it has been used for web characterization.
*LWP::RobotUA (Langheinrich , 2004) is a Perl class for implementing well-behaved parallel web robots distributed under Perl 5's license.
*Web Crawler Open source web crawler class for .NET (written in C#).
*Sherlock Holmes Sherlock Holmes gathers and indexes textual data (text files, web pages, ...), both locally and over the network. Holmes is sponsored and commercially used by the Czech web portal Centrum. It is also used by Onet.pl.
*YaCy, a free distributed search engine, built on principles of peer-to-peer networks (licensed under GPL).
*Ruya Ruya is an Open Source, high performance breadth-first, level-based web crawler. It is used to crawl English and Japanese websites in a well-behaved manner. It is released under the GPL and is written entirely in the Python language. A SingleDomainDelayCrawler implementation obeys robots.txt with a crawl delay.
*Universal Information Crawler Fast developing web crawler. Crawls Saves and analyzes the data.
*Agent Kernel A Java framework for schedule, thread, and storage management when crawling.
*Spider News, Information regarding building a spider in perl.
*Arachnode.NET is an open source promiscuous Web crawler for downloading, indexing and storing Internet content including e-mail addresses, files, hyperlinks, images, and Web pages. Arachnode.net is written in C# using SQL Server 2005 and is released under the GPL.
*dine is a multithreaded Java HTTP client/crawler that can be programmed in JavaScript released under the LGPL.
*Crawljax is an Ajax crawler based on a method which dynamically builds a `state-flow graph' modeling the various navigation paths and states within an Ajax application. Crawljax is written in Java and released under the BSD License.
A: Just because nobody copy pasted a working command ... I am trying ... ten years later. :D
wget --mirror --convert-links --adjust-extension --page-requisites \
--no-parent http://example.org
It worked like a charm for me.
A: You're not going to be able to handle things like AJAX requests without burning a webserver to the CD, which I understand you have already said is impossible.
wget will download the site for you (use the -r parameter for "recursive"), but any dynamic content like reports and so on of course will not work properly, you'll just get a single snapshot.
A: If you do end up having to run it off of a webserver, you might want to take a look at:
ServerToGo
It lets you run a WAMPP stack off of a CD, complete with MySQL/PHP/Apache support. The DBs are copied to the current user's temp directory on launch, and it can be run entirely without the user installing anything!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: BAML Decompiler / Viewer Could anyone recommend a good BAML Decompiler / Viewer besides BAML Viewer plugin for Reflector, which doesn't handle path geometry/data?
A: You can try this one by Cristian Ricciolo Civera.
I did not want to use its ClickOnce installer, but the CodePlex site provides a zip file for download.
I had to place Ricciolo.StylesExplorer.exe and Ricciolo.StylesExplorer.MarkupReflection.dll into GAC to make it work. I guess that is what the installation does in the first place.
A: BAML source can be viewed in
ILSpy.
Just load your compiled managed code, find and click on the *.baml file, and you will see the source.
A: You might like to have another look at the BAML addin for Reflector, as it's been recently updated by Andrew Smith. Have a look at his blog post and you'll note that he has fixed the issue with path data.
A: styles explorer
Haven't tried it myself yet but worth a try
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Filtering out duplicate values at runtime in a sql database - set based I have a database issue that I currently cannot wrap my head around with an easy solution. In my db I have a table that stores event values: 0's and 1's with a timestamp. The issue is that it is possible, as a business rule, for the same event to occur twice. Like below
*
*'2008-09-22 16:28:14.133', 0
*'2008-09-22 16:28:35.233', 1
*'2008-09-22 16:29:16.353', 1
*'2008-09-22 16:31:37.273', 0
*'2008-09-22 16:35:43.134', 0
*'2008-09-22 16:36:39.633', 1
*'2008-09-22 16:41:40.733', 0
In real life these events are cycled, and I'm trying to query to get the cycles of these, but I need to ignore the duplicate values (1,1). The current solution uses a SQL cursor to loop through each row and throw out the value if the previous one was the same. I've considered using a trigger on the insert to clean up into a post-processed table, but I can't think of an easy way to do this set based.
Any ideas or suggestions?
Thanks
A: (Preface: I've only done this in Oracle, but I'm pretty sure that if the db supports triggers it's all possible.)
Have a before insert trigger that selects the row with the max timestamp value. If that row's value is the same as the one you wish to insert, ignore it.
This should keep them all in a correct state.
Now, if you need both sets of states stored, the trigger can always insert on the all-inclusive table, but do the lookup and insert on the 'filtered' table only when the value changes.
A: Just so that I understand the problem.
You have, if you order the row set based on the timestamp, sometimes duplicate values occurring next to each other, like the above pair of 1's in the 2nd and 3rd items, and then you have double 0's in the 4th and 5th, is that it?
And you want the last of the corresponding pair (or sequence if there are more than 2)?
Why do you need to remove them? I'm asking because unless they occupy a significant share of the size of this table, it might be easier to filter them out like you do sequentially when you need to process or display them.
A solution, though not a very good one, would be to retrieve the minimum timestamp above the timestamp of the current row you're examining, and then retrieve the value from that, and if it's the same, don't return the current row.
Here's the SQL to get everything:
SELECT timestamp, value
FROM yourtable
And here's how to join in to get the minimum timestamp above the current one:
SELECT T1.timestamp, MIN(T2.timestamp) AS next_timestamp, T1.value
FROM yourtable T1, yourtable T2
WHERE T2.timestamp > T1.timestamp
GROUP BY T1.timestamp, T1.value
(I fear the above query will be horribly slow)
And then to retrieve the value corresponding to that minimum timestamp
SELECT T3.timestamp, T3.value
FROM (
SELECT T1.timestamp, MIN(T2.timestamp) AS next_timestamp, T1.value
FROM yourtable T1, yourtable T2
WHERE T2.timestamp > T1.timestamp
GROUP BY T1.timestamp, T1.value
) T3, yourtable AS T4
WHERE T3.next_timestamp = T4.timestamp
AND T3.value <> T4.value
Unfortunately this doesn't produce the last value, as it needs a following value to compare against. A simple dummy sentinel-value (you can union that in if you need to) will handle that.
Here's the sqlite database dump I tested the above query against:
BEGIN TRANSACTION;
CREATE TABLE yourtable (timestamp datetime, value int);
INSERT INTO "yourtable" VALUES('2008-09-22 16:28:14.133',0);
INSERT INTO "yourtable" VALUES('2008-09-22 16:28:35.233',1);
INSERT INTO "yourtable" VALUES('2008-09-22 16:29:16.353',1);
INSERT INTO "yourtable" VALUES('2008-09-22 16:31:37.273',0);
INSERT INTO "yourtable" VALUES('2008-09-22 16:35:43.134',0);
INSERT INTO "yourtable" VALUES('2008-09-22 16:36:39.633',1);
INSERT INTO "yourtable" VALUES('2008-09-22 16:41:40.733',0);
INSERT INTO "yourtable" VALUES('2099-12-31 23:59:59.999',2);
COMMIT;
And here is the (formatted) output:
timestamp value
2008-09-22 16:28:14.133 0
2008-09-22 16:29:16.353 1
2008-09-22 16:35:43.134 0
2008-09-22 16:36:39.633 1
2008-09-22 16:41:40.733 0
A: This problem is really a data capture problem. A typical database engine is not a good choice to solve it. A simple preprocessor should detect the change in the input data set and store only the relevant data (time stamp, etc.).
An easy solution, in a database environment (for example in Oracle), is to create a package which can have local memory variables for storing the last input data set and eliminating unneeded database access.
Of course you can use all the power of the database environment to define the "change in input data set" and store the filtered data. So it could be as easy or complex as you wish.
A: This uses a SQL Server Common Table Expression, but it can be inlined, with table t with columns dt and cyclestate:
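-- Firsts: for each timestamp, the earliest later timestamp whose cyclestate differs (the next state change)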
;WITH Firsts AS (
SELECT t1.dt
,MIN(t2.dt) AS Prevdt
FROM t AS t1
INNER JOIN t AS t2
ON t1.dt < t2.dt
AND t2.cyclestate <> t1.cyclestate
GROUP BY t1.dt
)
SELECT MIN(t1.dt) AS dt_start
,t2.dt AS dt_end
FROM t AS t1
INNER JOIN Firsts
ON t1.dt = Firsts.dt
INNER JOIN t AS t2
ON t2.dt = Firsts.Prevdt
AND t1.cyclestate <> t2.cyclestate
GROUP BY t2.dt
,t2.cyclestate
HAVING MIN(t1.cyclestate) = 0
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to sniff a USB port under Windows? From time to time, I need to dump USB traffic under Windows, mostly to support hardware under Linux, so my primary goal is to produce dump files for protocol analysis.
For USB traffic, it seems that SniffUsb is the clear winner... It works under Windows XP (but not later) and has a much nicer GUI than earlier versions. It produces huge dump files, but everything is there.
However, my device is in fact a USB serial device, so I turned to Portmon which can sniff serial port traffic without the USB overhead.
A: Busdog, an open source project hosted on github, has worked well for me. It has a driver it installs to allow it to monitor USB communications. The config window allows you to reinstall or remove the device at any time.
You can select the USB device you want from an enumerated list. A nice feature is to have it automatically trace a new device that is plugged in.
Data communications to and from an SWR analyzer I was reverse engineering were captured flawlessly.
A: USBSnoop works too - and is free.
Or, you could buy a USB to Ethernet converter and use whatever network sniffer you prefer to see the data.
A: Personally, I'd use QEMU or KVM and instrument their USB passthrough code, and then use libusb to prototype the replacement driver in user space (this latter bit I've done before; writing USB device drivers in Python is fun!).
A: Microsoft Message Analyzer was able to capture USB traffic, with Device and Log File parser from MS: link
Update: as mentioned by @facetus, MS Message Analyzer has been retired on November 25 2019.
A: After five years of waiting, it's now possible to sniff USB packets on Windows.
See http://desowin.org/usbpcap/tour.html for a quick tour. It works pretty well
A: *
*Since people don't seem to realize it, Wireshark does monitor USB traffic and has a parser for it; but the catch is it only works under Linux. Wireshark on Windows will not do this.
*It may be possible to plug the USB device you want to monitor, along with a Linux machine (with Wireshark running) and your Windows machine and just use the USB device under Windows.
*Problem with the above? I don't know how the Linux machine or the Windows machine will detect each other.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51"
}
|
Q: What's the Name of the Python Module that Formats arbitrary Text to nicely looking HTML? A while ago I came across a Python library that formats regular text to HTML, similar to Markdown, reStructuredText and Textile, just that it had no syntax at all. It detected indentations, quotes, links and newlines/paragraphs only.
Unfortunately I lost the name of the library and was unable to Google it. Anyone have any ideas?
Edit: reStructuredText aka rst == docutils. That's not what I'm looking for :)
A: Okay. I found it now. It's called PottyMouth.
A: Markdown in python is a python implementation of the perl based markdown utility.
Markdown converts various forms of structured text to valid HTML, and one of the supported forms is just plain ASCII. Use is pretty straightforward.
python markdown.py input_file.txt > output_file.html
Markdown can be easily called as a module too:
import markdown
html = markdown.markdown(your_text_string)
A: Sphinx is a documentation generator using reStructuredText. It's quite nice, although I haven't used it personally.
The website Hazel Tree, which compiles python text uses Sphinx, and so does the new Python documentation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: How can I update Perl on Windows without losing modules? At work I'm using Perl 5.8.0 on Windows.
When I first put Perl on, I went to CPAN, downloaded all the sources, made a few changes (in the .MAK file(?) to support threads, or things like that), and did nmake / nmake test / nmake install. Then, bit by bit, I've downloaded individual modules from CPAN and done the nmake dance.
So, I'd like to upgrade to a more recent version, but the new one must not break any existing scripts. Notably, a bunch of "use" modules that I've installed must be installed in the new version.
What's the most reliable (and easiest) way to update my current version, ensuring that everything I've done with the nmake dance will still be there after updating?
A: As others noted, start by installing the new perl in a separate place. I have several perls installed, each completely separate from all of the others.
To do that, you'll have to configure and compile the sources yourself. When you run configure, you'll get a chance to specify the installation location. I gave detailed instructions for this in an article, "Compiling My Own Perl", in the Spring 2008 issue of The Perl Review. There's also an Item in Effective Perl Programming that shows you how to do it.
Now, go back to your original distribution and run cpan -a to create an autobundle file. This is a Pod document that lists all of the extra stuff you've installed, and CPAN.pm understands how to use that to reinstall everything.
To install things in the new perl, use that perl's path to start CPAN.pm and install the autobundle file you created. CPAN.pm will get the right installation paths from that perl's configuration.
Watch the output to make sure things go well. This process won't install the same versions of the modules, but the latest versions.
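For example, the round trip might look roughly like this (the paths and the generated bundle name are hypothetical):
REM with the old perl: snapshot everything that is currently installed
C:\oldperl\bin\cpan -a
REM with the new perl: reinstall from that snapshot
C:\newperl\bin\perl -MCPAN -e "install 'Bundle::Snapshot_2008_09_22_00'"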
As for Strawberry Perl, there's a "portable" version you can install somewhere besides the default location. That way you could have the new perl on removable media. You can test it anywhere you like without disturbing the local installation. I don't think that's quite ready for general use though. The Berrybrew tool might help you manage that.
Good luck, :)
A: I would seriously consider looking at using Strawberry Perl.
A: You can install a second version of Perl in a different location. You'll have to re-install any non-core modules into the new version. In general, different versions of Perl are not binary compatible, which could be an issue if you have any program-specific libraries that utilize XS components. Pure Perl modules shouldn't be affected.
A: If you stay within the 5.8 track, all installed modules that contain XS (binary) extensions will continue to work, as binary compatibility is guaranteed within the same 5.8 series. If you moved to 5.10 then you would have to recompile any modules that contain XS components.
All you need to do is ensure that the new build lists the previous include directories in its @INC array (which is used to look for modules).
By the sounds of it, I think you're on Windows, in which case the current @INC paths can be viewed with
perl -le "print for @INC"
Make sure you target your new Perl version in another directory. It will happily coexist with the previous version, and this will allow you to choose which Perl installation gets used; it's just a question of getting your PATH order sorted out. As soon as a Perl interpreter is started up, it knows where to look for the rest of its modules.
Strawberry Perl is probably the nicest distribution on Windows these days for rolling your own.
A: When I did it I installed the newer one into a separate directory. There's a bit of added confusion running two versions, but it definitely helps make sure everything's working first, and provides a quick way of switching back to the old one in a pinch. I also set up Apache to run two separate services, so I could monkey around with the newer Perl in one service without touching the production one on the old Perl.
It's probably a lot wiser, in hindsight, to install on a separate computer, and do your testing there. Record every configuration change you need to make.
I am not sure about building it yourself—I always just used prepackaged binaries for Windows.
I'm not sure I understand exactly what you're asking. Do you have a list of changes you made to the 5.8 makefile? Or is the question how to obtain such a list? Are you also asking how to find out which packages above the base install you've obtained from CPAN? Are you also asking how to test that your custom changes won't break those packages if you get them from CPAN again?
A: I think the answer to this involves virtualisation of some kind:
*
*Set up an exact copy of your current live machine. Upgrade Perl, using the same directory locations and structures as you're using at the moment.
*Go through your scripts testing them on the new image.
*Once you're happy, flip the switch.
The thinking behind this is that there's probably all sorts of subtle dependencies and assumptions you haven't thought of. While unlikely, the latest version of a particular module (possibly even a core module, although that's even more unlikely) might have a subtle difference compared to the one you were using. Unless you've exhaustively gone through your entire codebase, there's quite possibly a particular module that's required only under certain circumstances.
You can try and spot this by building a list of all your scripts - a list that you should have anyway, by dint of all your code being under version control (you are using version control, e.g. Subversion, yes?) - and iterating through it, running perl -c on each script. e.g. this script. That sort of automated test is invaluable: you can set it running, go away for a coffee or whatever, and come back to check whether everything worked. The first few times you'll probably find an obscure module that you'd forgotten about, which is fine: the whole point of automating this is so that you don't have to do the drudge-work of checking every single script.
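On Windows, that check can be as simple as a one-liner at the command prompt (the scripts directory here is made up):
for /R C:\scripts %f in (*.pl) do perl -c "%f"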
A: Why don't you use ActivePerl and its "ppm" tool to (re)install modules?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: SVN Versioning: How to have for each project its own revision scheme? Just a small SVN "problem" here.
I set up my own SVN server by following "Setting up Subversion on Windows".
Now I made a rep in which all my projects will go.
Now, I checked the rep out in a folder called "Projects".
Now If I make a project and check it in, that project is revision 1. If I make a second project, and check it in, that Project is at revision 2. Thus, if I make a change to Project 1, that project will then be at Revision 3.
What I would really want is for each project to have its own revision scheme. How do I do this?
A: The only way is to have each project in a completely separate repository. Items within the same repository will always exhibit the behavior you mentioned in your question.
From Here
Unlike those of many other version control systems, Subversion's revision numbers apply to entire trees, not individual files. Each revision number selects an entire tree, a particular state of the repository after some committed change.
A: You have to create a separate repository for each project. This is in general a good idea anyways so no downside there :)
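For example (paths are made up), you would create one repository per project and check each one out into its own working copy:
svnadmin create D:\svn\project1
svnadmin create D:\svn\project2
svn checkout file:///D:/svn/project1 C:\Projects\project1
svn checkout file:///D:/svn/project2 C:\Projects\project2
Each repository then keeps its own independent revision counter.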
A: Can you describe why having a single subversion revision number across multiple projects is a problem for you?
There are some legitimate advantages to using a single repository for all of your projects. The biggest one being that you're probably better able to control changes between common code in multiple projects.
If you have a problem with the concept of a single incrementing subversion revision number across multiple projects now, have you considered the situation where you branch one of your projects? (remembering that a normal branch will also have a globally incrementing subversion revision number)
It sounds like you're trying to use the repository revision number as part of the build or release number?
If that's the case perhaps you could consider implementing a different build numbering scheme for your project/s that can then be associated with the subversion revision number.
Such an association can be made by using a convention of creating branches with the release number and putting the subversion revision in the comment for the branch.
Some schemes were discussed in this question
A: You need to create repositories inside your "Projects" folder, and when you do the initial checkout, checkout "???/projects/repo1"... this will keep the working copies separate on your machine, and you will check in/out completely separately of each other.
A: They discuss it a little better here: http://www.nabble.com/Multiple-Repositories-in-a-Windows-Server-td15014106.html
Basically you can do:
svnserve -r /path/to/repository
svn://hostname/
or
svnserve -r /path/to/directory/containing/many/repositories
svn://hostname/repositoryname/
Alternately, you could go server'less and just host the individual repositories on a local or networked drive.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How can I best create a SharePoint list view that shows only root folder contents? I have a custom SharePoint list definition that is based on the Document content type. My library instance that I create from this contains many HTML documents, and each of those documents has some images saved into a subfolder inside this library. What's the best way to create a view that will show all of those top-level HTML documents without showing all of the image accessory files? I'd prefer to define this within the schema.xml file for my list definition.
A: I believe adding Scope="FilesOnly" to the View tag in your list definition should do the trick.
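For example, in the view definition inside schema.xml, something along these lines (every attribute other than Scope is just a placeholder here):
<View BaseViewID="1" Type="HTML" DisplayName="Root documents" Scope="FilesOnly" Url="RootDocs.aspx">
  <!-- ViewFields, Query, etc. as in your existing views -->
</View>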
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Does anyone have a good article or good advice for class naming for n-tier web applications? I'm used to the layout that LLBLGen gives when it generates objects based on a database structure, which might generate the following class files for a given "User" table in the database:
/EntityClasses/UserEntity.vb
/CollectionClasses/UserCollection.vb
This provides some base functionality for data access. However, when you want to implement business logic on top of that, how are you laying things out? For example, given a table structure that might look like this:
USER
userId
firstName
lastName
username
password
lockedOut
What if you wanted to lock out a user? What code would you call from the presentation layer? Would you instantiate the UserEntity class, and do:
User = new UserEntity(userId)
User.lockedOut = true
User.Save()
Or would you create a new class, such as UserHelper (/BusinessLogic/UserHelper.cs), which might have a LockOutUser function. That would change the code to be:
UH = new UserHelper()
UH.LockOutUser(userId)
Or would you extend the base UserEntity class, and create UserEntityExt that adds the new functionality? Therefore, the code from the presentation layer might look like:
User = new UserEntityExt(userId)
User.LockOutUser()
Or... would you do something else altogether?
And what would your directory/namespace structure and file/class naming conventions be?
A: I think what you are looking for is a service layer which would sit on top of the domain objects. You essentially have this with your second option although I might call it UserService or UserTasks. By encapsulating this LockUser process in a single place it will be easy to change later when there might be more steps or other domain objects involved. Also, this would be the place to implement transactions when dealing with multiple database calls.
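A minimal sketch of such a class (the names are illustrative, and the entity API follows the pseudocode in the question rather than the exact LLBLGen-generated members):
// /BusinessLogic/UserService.cs
public class UserService
{
    public void LockOutUser(int userId)
    {
        // wrap the generated data-access classes in one named business operation
        UserEntity user = new UserEntity(userId);
        user.LockedOut = true;
        user.Save();
    }
}

// presentation layer
UserService service = new UserService();
service.LockOutUser(userId);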
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Why am I losing data when using a vxWorks pipe? I am using pipes to transfer information between two vxWorks tasks.
Here is a code sample:
Init()
{
fd = open("/pipe/mydev", O_RDWR, 0777);
...
}
taskRx()
{
...
len = read(fd, rxbuf, MAX_RX_LEN);
...
}
taskTx()
{
...
len = write(fd, txbuf, txLen);
...
}
If we send a message that is longer than MAX_RX_LEN, (ie txLen > MAX_RX_LEN) we do 2 reads to get the remainder of the message.
What we noticed is that the 2nd read didn't receive any data!
Why is that?
A: VxWorks' pipe mechanism is not stream based (unlike unix named pipes).
It is a layer on top of the vxWorks message Queue facility. As such, it has the same limitations as a message queue: when reading from the pipe, you are really reading the entire message. If your receive buffer does not have enough space to store the received data, the overflow is simply discarded.
When doing a receive on a message Queue or a pipe, always make sure the buffer is set to the maximum size of a queue element.
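A sketch of what that means in code (MAX_MSG_LEN and the queue depth are assumptions; the point is that the read buffer matches the message size the pipe was created with):
#include <vxWorks.h>
#include <pipeDrv.h>
#include <ioLib.h>
#include <fcntl.h>

#define MAX_MSG_LEN 256   /* largest message the writer will ever send */

void rxExample(void)
{
    int  fd;
    char rxbuf[MAX_MSG_LEN];
    int  len;

    pipeDevCreate("/pipe/mydev", 10, MAX_MSG_LEN); /* 10 messages, MAX_MSG_LEN bytes each */
    fd  = open("/pipe/mydev", O_RDWR, 0777);
    len = read(fd, rxbuf, sizeof(rxbuf));          /* one read returns one whole message  */
}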
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Good alternative to GenuineChannels for .net remoting We have been using GenuineChannels in our product for the last 4 years. GenuineChannels now appears to have become unsupported and the main developer guy Dmitri has I think joined Microsoft. I have the source as part of the product but not the networking / .net knowledge to support it.
Has anyone found a good alternative to GenuineChannels for good .net remoting? I need our software product to run on top of some supported software!
A: WCF, Windows Communication Foundation is the approach you should look into. WCF integrates web services, .net remoting, distributed transactions and message queuing into a common communications model.
Getting started with WCF is at http://msdn.microsoft.com/en-us/library/ms734712.aspx
The tutorial is at http://msdn.microsoft.com/en-us/library/ms734712.aspx
Oreilly has a book about WCF: http://oreilly.com/catalog/9780596526993/
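To give a flavour of what a WCF service looks like, here is a minimal self-hosted sketch (the contract, address and binding are arbitrary examples, not a drop-in replacement for your remoting interfaces):
using System;
using System.ServiceModel;

[ServiceContract]
public interface ICalculator
{
    [OperationContract]
    int Add(int a, int b);
}

public class Calculator : ICalculator
{
    public int Add(int a, int b) { return a + b; }
}

class Host
{
    static void Main()
    {
        ServiceHost host = new ServiceHost(typeof(Calculator),
            new Uri("net.tcp://localhost:8000/calc"));
        host.AddServiceEndpoint(typeof(ICalculator), new NetTcpBinding(), "");
        host.Open();          // service is now listening
        Console.ReadLine();   // keep the host alive until Enter is pressed
        host.Close();
    }
}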
A: There is a product called DotNetRemoting that claims to be an easy replacement for GenuineChannels. They even have a section of their forum dedicated to converting from GenuineChannels to DNR.
A: WCF (Windows Communication Foundation) is the way to go - much more scalable, easily swappable to other technologies (if you need in the future), and builtin to .NET 3.0.
A lot of other nice stuff there, too... http://msdn.microsoft.com/en-us/netframework/aa663324.aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Create custom code snippet in Visual Studio 2008 Like the title says, how do you create custom code snippets in Visual Studio 2008?
A: The MSDN links are nice, but sometimes I prefer simple tutorials.
A: Tools->Code Snippets Manager
To get your list of directories. Select (or Add) My Code Snippets.
The snippets themselves have to be created as separate files with a .snippet extension.
Here is a tutorial on using and creating them: Code Snippets in Visual Studio
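A bare-bones .snippet file looks roughly like this (the title, shortcut and code body are of course whatever you want):
<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>TODO comment</Title>
      <Shortcut>todo</Shortcut>
    </Header>
    <Snippet>
      <Code Language="CSharp">
        <![CDATA[// TODO: $end$]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>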
A: This was just released too: http://codeplex.com/SnippetDesigner
The Snippet Designer is a plugin which enhances the Visual Studio IDE to allow a richer and more productive code snippet experience...
Features
A Snippet editor integrated inside of the IDE which supports C#, Visual Basic, JavaScript, HTML, XML and SQL
*
*Access it by opening any .snippet file or going to File -> New -> File -> Code Snippet File
*It uses the native Visual Studio code editor so that you can write the snippets in the same environment you write your code.
*It lets you easily mark replacements by a convenient right click menu.
*It displays properties of the snippet inside the Visual Studio properties window...
A Snippet Explorer tool window to search snippets on your computer.
*
*It is located under View -> Other Windows -> Snippet Explorer
*This tool window contains a code preview window which lets you peek inside the snippet to see what it is without opening the file.
*Maintains an index of snippets on your computer for quick searching.
*Provides a quick way to find a code snippet to use, edit or delete...
A: Here's a link to a utility for Creating/editing Snippets. It works for more languages than just VB despite the classification in the link.
http://msdn.microsoft.com/en-us/vbasic/bb973770.aspx
A: See official write-up here:
http://msdn.microsoft.com/en-us/library/ms165392.aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Joining other tables in oracle tree queries Given a simple (id, description) table t1, such as
id description
-- -----------
1 Alice
2 Bob
3 Carol
4 David
5 Erica
6 Fred
And a parent-child relationship table t2, such as
parent child
------ -----
1 2
1 3
4 5
5 6
Oracle offers a way of traversing this as a tree with some custom syntax extensions:
select parent, child, sys_connect_by_path(child, '/') as "path"
from t2
connect by prior parent = child
The exact syntax is not important, and I've probably made a mistake in the above. The important thing is that the above will produce something that looks like
parent child path
------ ----- ----
1 2 /1/2
1 3 /1/3
4 5 /4/5
4 6 /4/5/6
5 6 /5/6
My question is this: is it possible to join another table within the sys_connect_by_path(), such as the t1 table above, to produce something like:
parent child path
------ ----- ----
1 2 /Alice/Bob
1 3 /Alice/Carol
... and so on...
A: In your query, replace T2 with a subquery that joins T1 and T2, and returns parent, child and child description. Then in the sys_connect_by_path function, reference the child description from your subquery.
A: Based on Mike McAllister's idea, the following uses a derived table to achieve the desired result:
select
T.PARENT
,T.CHILD
,sys_connect_by_path(T.CDESC, '/')
from
(
select
t2.parent as PARENT
,t2.child as CHILD
,t1.description as CDESC
from
t1, t2
where
t2.child = t1.id
) T
where
level > 1 and connect_by_isleaf = 1
connect by prior
T.CHILD = T.PARENT
In my problem, all the parents are anchored under a "super-parent" root, which means that the paths can be fully described with SYS_CONNECT_BY_PATH, thereby obviating the need for cagcowboy's technique of concatenating the parent with the path.
A: SELECT parent, child, parents.description||sys_connect_by_path(childs.description, '/') AS "path"
FROM T1 parents, T1 childs, T2
WHERE T2.parent = parents.id
AND T2.child = childs.id
CONNECT BY PRIOR parent = child
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: How do I use timezones with a datetime object in python? How do I properly represent a different timezone in my timezone? The below example only works because I know that EDT is one hour ahead of me, so I can uncomment the subtraction of myTimeZone()
import datetime, re
from datetime import tzinfo
class myTimeZone(tzinfo):
    """docstring for myTimeZone"""
    def utcoffset(self, dt):
        return datetime.timedelta(hours=1)

def myDateHandler(aDateString):
    """u'Sat, 6 Sep 2008 21:16:33 EDT'"""
    _my_date_pattern = re.compile(r'\w+\,\s+(\d+)\s+(\w+)\s+(\d+)\s+(\d+)\:(\d+)\:(\d+)')
    day, month, year, hour, minute, second = _my_date_pattern.search(aDateString).groups()
    month = [
        'JAN', 'FEB', 'MAR',
        'APR', 'MAY', 'JUN',
        'JUL', 'AUG', 'SEP',
        'OCT', 'NOV', 'DEC'
    ].index(month.upper()) + 1
    dt = datetime.datetime(
        int(year), int(month), int(day),
        int(hour), int(minute), int(second)
    )
    # dt = dt - datetime.timedelta(hours=1)
    # dt = dt - dt.tzinfo.utcoffset(myTimeZone())
    return (dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second, 0, 0, 0)

def main():
    print myDateHandler("Sat, 6 Sep 2008 21:16:33 EDT")

if __name__ == '__main__':
    main()
A: For the current local timezone, you can you use:
>>> import time
>>> offset = time.timezone if (time.localtime().tm_isdst == 0) else time.altzone
>>> offset / 60 / 60 * -1
-9
The value returned is in seconds West of UTC (with areas East of UTC getting a negative value). This is the opposite to how we'd actually like it, hence the * -1.
localtime().tm_isdst will be zero if daylight savings is currently not in effect (although this may not be correct if an area has recently changed their daylight savings law).
A: I recommend babel and pytz when working with timezones. Keep your internal datetime objects naive and in UTC and convert to your timezone for formatting only. The reason why you probably want naive objects (objects without timezone information) is that many libraries and database adapters have no idea about timezones.
*
*Babel
*pytz
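For example, with pytz (a sketch; 'US/Eastern' and the sample timestamp are just illustrations):
from datetime import datetime
import pytz

naive_utc = datetime(2008, 9, 7, 1, 16, 33)          # stored naive, understood to be UTC
eastern = pytz.timezone('US/Eastern')
local = pytz.utc.localize(naive_utc).astimezone(eastern)
print(local.strftime('%a, %d %b %Y %H:%M:%S %Z'))    # formatting only: Sat, 06 Sep 2008 21:16:33 EDT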
A: Python >= 3.9
Python comes with zoneinfo as part of the standard lib. Example usage:
from datetime import datetime, timezone
from zoneinfo import ZoneInfo
UTC = datetime(2012,11,10,9,0,0, tzinfo=timezone.utc)
# convert to another tz with "astimezone":
eastern = UTC.astimezone(ZoneInfo("US/Eastern"))
# note that it is safe to use "replace",
# to get the same wall time in a different tz:
pacific = eastern.replace(tzinfo=ZoneInfo("US/Pacific"))
print(UTC.isoformat())
print(eastern.isoformat())
print(pacific.isoformat())
# 2012-11-10T09:00:00+00:00
# 2012-11-10T04:00:00-05:00
# 2012-11-10T04:00:00-08:00
Also note this section from the docs:
The zoneinfo module does not directly provide time zone data, and instead pulls time zone information from the system time zone database or the first-party PyPI package tzdata, if available.
So don't forget to run pip install tzdata, on Windows at least.
A: The Python standard library doesn't contain timezone information, because unfortunately timezone data changes a lot faster than Python. You need a third-party module for this; the usual choice is pytz
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
}
|
Q: How do I clear the cache of Ruby Phusion Passenger in Ubuntu? I've tried restarting apache and the cached pages still appear, so there must be a folder someplace. I don't have a 'public/cache', so what other places should I look? Is there a URL flag that could trigger this effect as well?
A: You need to touch a file to have phusion clear out, like:
touch /webapps/mycook/tmp/restart.txt
See the docs
A: I'm in development mode.
I discovered I had to restart the apache service and clear the browser cache to get my changes to appear 100% of the time.
A: First, have you cleared your browsers cache? You can do this through the browser menus, in options somewhere.
The next question I would ask is are you running the app in production or development mode?
A: For those who come here nowadays
Nowadays there are more ways to restart Passenger via SSH.
Here is the updated doc
passenger-config restart-app
or
passenger-config restart-app /Users/phusion/testapp
and the old way, which can be used if you only have FTP access:
touch tmp/restart.txt
this one does not take effect immediately; Passenger watches the file's modified timestamp and performs the restart when it changes.
A: Ctrl+F5 will forcibly reload the page and all its linked assets.
It's kind of like clearing the cache for just that one page.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Scriptaculous Droppables onDrop callback, How to reference each element? Documentation can be found here
It says in the example:
onDrop:
Called whenever a Draggable is released over the Droppable and the Droppable accepts it. The callback gets three parameters: the Draggable element, the Droppable element and the Event. You can extract additional information about the drop – like if the Ctrl or Shift keys were pressed – from the Event object.
Then it gives some code
Droppables.add('shopping_cart', {
accept: 'products',
onDrop: function(element) {
$('shopping_cart_text').update('Dropped the ' + element.alt + ' on me.');
}
});
It uses the ambiguous word 'element' in the code. My question is, does anyone have a good example on how to reference the draggable element and the droppable element in this callback javascript function?
A: Going by the example further down the page, the callback function can take as many of the parameters as you need:
onDrop: function() { $('droppable_demo').highlight(); }
In this case, they have used none of the callback parameters. To this end, presumably to access all three as mentioned in the paragraph you quoted, you can define:
onDrop: function(dragged, dropped, event) { }
A: I added this example to the documentation...
Droppables.add('shopping_cart', {
accept: 'products',
onDrop: function(dragged, dropped, event) {
alert('Dragged: ' + dragged.id);
alert('Dropped onto: ' + dropped.id);
alert('Held ctrl key: ' + event.ctrlKey);
}
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: javascript: cancel all kinds of requests My website makes a lot of requests. I often need to cancel all current requests, so that the browser is not blocking relevant new requests.
I have 3 kinds of requests:
*
*Ajax
*inserted script-tags (which do JSONP-Communication)
*inserted image-tags (which cause the browser to request data from various servers)
For Ajax it's no problem, as the XMLHttpRequest object supports canceling.
What I need is a way to make any browser stop loading resources, from DOM-Objects.
It looks like simply removing an object (e.g. an image tag) from the DOM only helps avoid a request if the request is not already running.
UPDATE: a way to cancel just the irrelevant requests, instead of really every request, would be perfect.
A: window.stop() should cancel any pending image or script requests.
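Combining that with aborting your own Ajax calls might look something like this (a sketch; pending is assumed to be an array you keep of your active XMLHttpRequest objects):
function cancelAllRequests(pending) {
    for (var i = 0; i < pending.length; i++) {
        pending[i].abort();                 // Ajax
    }
    if (window.stop) {
        window.stop();                      // images / script tags in most browsers
    } else if (document.execCommand) {
        document.execCommand('Stop');       // commonly used fallback for old IE
    }
}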
A: I think document.close() stops all requests, but I'm not so sure about it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
}
|
Q: Inner join across multiple access db's I am re-designing an application for an ASP.NET CMS that I really don't like. I have made some improvements in performance only to discover that not only does this CMS use MS SQL but some users "simply" use an MS Access database.
The problem is that some of the tables I inner join are, in the MS Access version, stored in two different files. I am not allowed to simply move the tables to the other mdb file.
I am now trying to figure out a good way to "inner join" across multiple Access db files.
It would really be a pity if I had to fetch all the data and then do the join programmatically!
Thanks
A: If you have access to the MDBs, and are able to change them, you might consider using Linked Tables. Access provides the ability to link to external data (in other MDBs, in Excel files, even in SQL Server or Oracle), and then you can perform your joins against the links.
I'd strongly encourage performance testing such an option. If it's feasible to migrate users of the Access databases to another system (even SQL Express), that would also be preferable -- last I checked, there are no 64-bit JET drivers for ODBC anymore, so if the app is ever hosted in a 64-bit environment, these users will be hosed.
A: You don't need linked tables at all. There are two approaches to using data from different MDBs that can be used without a linked table. The first is to use "IN 'c:\MyDBs\Access.mdb'" in the FROM clause of your SQL. One of your saved queries would be like:
SELECT MyTable.*
FROM MyTable IN 'c:\MyDBs\Access.mdb'
and the other saved query would be:
SELECT OtherTable.*
FROM OtherTable IN 'c:\MyDBs\Other.mdb'
You could then save those queries, and then use the saved queries to join the two tables.
Alternatively, you can manage it all in a single SQL statement by specifying the path to the source MDB for each table in the FROM clause thus:
SELECT MyTable.ID, OtherTable.OtherField
FROM [c:\MyDBs\Access.mdb].MyTable
INNER JOIN [c:\MyDBs\Other.mdb].OtherTable ON MyTable.ID = OtherTable.ID
Keep one thing in mind, though:
The Jet query optimizer won't necessarily be able to use the indexes from these tables for the join (whether it will use them for criteria on individual fields is another question), so this could be extremely slow (in my tests, it's not, but I'm not using big datasets to test). But that performance issue applies to linked tables, too.
A: Inside one access DB you can create "linked tables" that point to the other DB. You should (I think) be able to query the tables as if they both existed in the same DB.
It does mean you have to change one of the DBs to create the virtual table, but at least you're not actually moving the data, just making a pointer to it
A: Within Access, you can add remote tables through the "Linked Table Manager". You could add the links to one Access file or the other, or you could create a new Access file that references the tables in both files. After this is done, the inner-join queries are no different than doing them in a single database.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Best approach for sortable table with a lot of data On our web application, the search results are displayed in sortable tables. The user can click on any column and sort the result. The problem is that sometimes the user does a broad search and gets a lot of data returned. To make the sortable part work, you probably need all the results, which takes a long time. Or I can retrieve a few results at a time, but then sorting won't really work well. What's the best practice for displaying sortable tables that might contain lots of data?
Thanks for all the advice. I will certainly be going over these.
We are using an existing JavaScript framework that has the sortable table; "lots" of results means hundreds. The problem is that our users are at some remote site and a lot of the delay is the network time to send/receive data from the data center. Sorting the data at the database side and only sending one page worth of results at a time is nice; but when the user clicks some column header, another round trip is done, which always adds 3-4 seconds.
Well, I guess that might be the network team's problem :)
A: You should be doing paging back on the database server. E.g. on SQL 2005 and SQL 2008 there are paging techniques. I'd suggest looking at paging options for whatever system you're looking at.
A: Using sorting and paging at the database level is the correct answer. If your query returns 1000 rows, but you're only going to show the user 10 of them, there is no need for the other 990 to be sent across the network.
Here is a mysql example. Say you need 10 rows, 21-30, from the 'people' table:
SELECT * FROM people LIMIT 21, 10
A: What database are you using? There are some good paging options in SQL 2005 and upwards using ROW_NUMBER that allow you to do paging on the server. I found this good one on Christian Darie's blog,
e.g. this procedure, which is used to page products in a category. You just pass in the page number you want, the number of products per page, etc.
CREATE PROCEDURE GetProductsInCategory
(@CategoryID INT,
@DescriptionLength INT,
@PageNumber INT,
@ProductsPerPage INT,
@HowManyProducts INT OUTPUT)
AS
-- declare a new TABLE variable
DECLARE @Products TABLE
(RowNumber INT,
ProductID INT,
Name VARCHAR(50),
Description VARCHAR(5000),
Price MONEY,
Image1FileName VARCHAR(50),
Image2FileName VARCHAR(50),
OnDepartmentPromotion BIT,
OnCatalogPromotion BIT)
-- populate the table variable with the complete list of products
INSERT INTO @Products
SELECT ROW_NUMBER() OVER (ORDER BY Product.ProductID),
Product.ProductID, Name,
SUBSTRING(Description, 1, @DescriptionLength) + '...' AS Description,
Price, Image1FileName, Image2FileName, OnDepartmentPromotion, OnCatalogPromotion
FROM Product INNER JOIN ProductCategory
ON Product.ProductID = ProductCategory.ProductID
WHERE ProductCategory.CategoryID = @CategoryID
-- return the total number of products using an OUTPUT variable
SELECT @HowManyProducts = COUNT(ProductID) FROM @Products
-- extract the requested page of products
SELECT ProductID, Name, Description, Price, Image1FileName,
Image2FileName, OnDepartmentPromotion, OnCatalogPromotion
FROM @Products
WHERE RowNumber > (@PageNumber - 1) * @ProductsPerPage
AND RowNumber <= @PageNumber * @ProductsPerPage
A: You could do the sorting on the server. AJAX would eliminate the necessity of a full refresh, but there'd still be a delay. Besides, databases are generally very fast at sorting.
A: For these situations I employ techniques on the SQL Server side that not only leverage the database for the sorting, but also use custom paging to ONLY return the specific records needed.
It is a bit of a pain to implemement at first, but the performance is amazing afterwards!
A: How large is "a lot" of data? Hundreds of rows? Thousands?
Sorting can be done via JavaScript painlessly with Mochikit Sortable Tables. However, if the data takes a long time to sort (most likely a second or two [or three!]) then you may want to give the user some visual cue that something is happening and the page didn't just freeze. For example, tint the screen (a la Lightbox) and display a "sorting" animation or text.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How to authenticate in a Facebook Flash application? I do not understand how I can authenticate a user in a Facebook Flash application. I have read from the documentation that Facebook platform gives arguments, such as user id and session secret, to the Flash application, but how can I inside the Flash application make sure that these arguments are correct and not fake? Should I make some call to some Facebook platform method and check that no error occurs to be sure?
Another question related to Facebook: Can I only store Facebook users' user id's in my own database and not for example users' names? I guess I am not allowed to show users' real names when they are visiting my website directly and not through Facebook?
A: Use this API for flash/flex communication to Facebook services:
http://code.google.com/p/facebook-actionscript-api/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: IIS CSS Caching When we are developing new sites, or testing changes to them that involve CSS, after the new code is committed and someone goes to check the changes, they always see a cached version of the old CSS. This is causing a lot of problems in testing because people are never sure if they have the latest CSS on screen (I know shift-clicking refresh clears this cache, but I can't expect end users to know to do this). What are my possible solutions?
A: If you're serving your CSS from static files (or anything that the query string doesn't matter for), try varying that to ensure that the browser makes a fresh request, as it will think that it's pulling a completley different resource, so have for example:
"styles.css?token=1234" in the CSS reference in your markup and change the value of "token" on each CSS check-in
A: In your development environment, set the Expires header much lower. In your Production environment, set it higher, and then set it low about a week before you do your release.
A: It's not a great solution, but I've gotten around this before at the page level by adding a querystring to the end of the call to the CSS file:
<link href="/css/global.css?id=3939" type="text/css" rel="stylesheet" />
Then I'd randomize the id value so that it always loads a different value on page load. Then I'd take this code out before pushing to production. I suppose you could also pull the value from a config file, so that it only has to be loaded once per commit.
A: Similar (a bit more detail) answers given for the JavaScript version of this question, which has the same problem/solution
Help with aggressive JavaScript caching
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: What can Services do under Windows? Does anyone have a good guide to capabilities of Windows Services under XP? In particular, I am trying to find out what happens when a program being run as a service tries to open windows, but hasn't been given permission to interact with the desktop.
Basically, I have a program that is/was a GUI application, that should be able to run as a service for long term background processing. Rewriting the program to not display the GUI elements when doing background processing is a major effort, so I'd like to see if there is just a way to ignore the UI elements. It is sort of working now, as long as too many windows aren't opened. I'm trying to figure out what limits I might be running into. Ideally, there would be an MSDN page that discusses this, but I've had no luck finding one yet.
A: Generally, services should be designed to not have any visible UI. The entire point of a service is to run in the background, without UI, unattended. (Think SQL Server, IIS, etc.)
In most scenarios, a separate application controls the service's operation, should a GUI be needed. (Continuing the samples I just mentioned, SQL Server Management Studio, IIS Manager, etc.) These separate applications configure and manipulate the service (and occasionally, if needed, bounce said service).
If your service requires occasional UI, and said UI can't be isolated to a control app, then you probably should reconsider the fact that you're using a service to begin with. Perhaps a UI application which resides in the system notification area is the right pattern to use? (E.G., Windows Live Communicator.)
A: A service in Microsoft Windows is a program that runs whenever the computer is running the operating system. It does not require a user to be logged on. Services are needed to perform user-independent tasks such as directory replication, process monitoring, or services to other machines on a network, such as support for the Internet HTTP protocol
Usually it is implemented as a console application that runs in the background and performs tasks that don't require user interaction.
The installed services can be configured through the Services applet, available from
Control Panel --> Administrative Tools in Windows 2000/XP.
Services can be configured to start automatically when operating system starts, so you dont have to start each of them manually after a system reboot.
*
*Creating a Simple Service - MSDN Article
*Writing Windows Services Made easy - Code Project Article
*Five Steps to Writing Windows Services in C - DevX Article
A: If you should be thinking of eventually migrating to a newer OS such as Vista or Server 2008, you will find that you cannot give a service permission to interact with the desktop at all. Therefore, from the point of view of forwards compatibility, you should design your service to not require it.
A: A service in Windows XP can interact with the Desktop if its "Allow Service to Interact with Desktop" property (MMC -> service properties -> Log On tab) is checked. It is also possible to do so by doing the following:
hWinstation = OpenWindowStation("winsta0", FALSE, MAXIMUM_ALLOWED);
SetProcessWindowStation(hWinstation);
hDesktop = OpenDesktop("default", 0, FALSE, MAXIMUM_ALLOWED);
SetThreadDesktop(hDesktop);
But be aware that presenting UI from a service process in Windows XP will almost always lead to a security problem (see Shatter attack). You should try to break out the UI part of your application from the service.
A: Usually the service won't have permission to write to the window station and desktop, so it will fail; even running applications that load user32.dll can fail simply because user32 has initialization code that wants to talk to the window station and can't get access to it unless the service is running as an administrator.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Can Mercurial be integrated into Visual Studio 2008? Is there a tool to integrate Mercurial into Visual Studio?
I am just curious. Mercurial is pretty nice, even with 'just' TortoiseHG, but integration with Visual Studio would be better.
A: VisualHG
Way to go, me. (Found the answer myself afterwards. Oh well, someone might find this useful.)
A: There is also HgSccPackage, which does not need TortoiseHG to work.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Delegates as parameters in VB.NET Backstory: I'm using log4net to handle all logging for a project I'm working on. One particular method can be called under several different circumstances -- some that warrant the log messages to be errors and others that warrant the log messages to be warnings.
So, as an example, how could I turn
Public Sub CheckDifference(ByVal A As Integer, ByVal B As Integer)
If (B - A) > 5 Then
log.ErrorFormat("Difference ({0}) is outside of acceptable range.", (B - A))
End If
End Sub
Into something more along the lines of:
Public Sub CheckDifference(ByVal A As Integer, ByVal B As Integer, "Some delegate info here")
If (B - A) > 5 Then
**delegateinfo**.Invoke("Difference ({0}) is outside of acceptable range.", (B - A))
End If
End Sub
So that I could call it and pass either log.ErrorFormat or log.WarnFormat as the delegate?
I'm using VB.NET with VS 2008 and .NET 3.5 SP1. Also, I'm fairly new to delegates in general, so if this question should be worded differently to remove any ambiguities, let me know.
EDIT: Also, how could I initialize the delegate to either the ErrorFormat or the WarnFormat in the class constructor? Would it be as easy as myDelegate = log.ErrorFormat? I would imagine there is more to it than that (pardon my ignorance on the subject -- delegates are really something I want to learn more about, but so far they have eluded my understanding).
A: Declare your delegate signature:
Public Delegate Sub Format(ByVal value As String)
Define your Test function:
Public Sub CheckDifference(ByVal A As Integer, _
ByVal B As Integer, _
ByVal format As Format)
If (B - A) > 5 Then
format.Invoke(String.Format( _
"Difference ({0}) is outside of acceptable range.", (B - A)))
End If
End Sub
Somewhere in your code call your Test function:
CheckDifference(Foo, Bar, AddressOf log.WriteWarn)
Or
CheckDifference(Foo, Bar, AddressOf log.WriteError)
A: You'll first want to declare a delegate at the Class/Module level (all this code is from memory/not tested):
Private Delegate Sub LogErrorDelegate(txt as string, byval paramarray fields() as string)
Then .. you'll want to declare it as a property to your class e.g.
Private _LogError As LogErrorDelegate
Public Property LogError as LogErrorDelegate
Get
Return _LogError
End Get
Set(value as LogErrorDelegate)
_LogError = value
End Set
End Property
The way to instantiate the delegate is:
Dim led as New LogErrorDelegate(AddressOf log.ErrorFormat)
A: Public Delegate errorCall(ByVal error As String, Params objs As Objects())
CheckDifference(10, 0, AddressOf log.ErrorFormat)
Please forgive the formatting :P
Basically though, create the delegate that you want, with the correct signature, and pass the address of it to the method.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Install tool to create virtual directory on IIS What install tool can I use to create a virtual directory on IIS? Open source, free, or something I can do in C#.
A: WiX can create IIS virtual directories.
A: You can create IIS virtual directorys using NAnt and the MKIISdir task in the NAntContrib project
A: You can use the VS.NET Web Setup Project included in Visual Studio.
Check this article for a list of deployment possibilities for your web app.
A: You could use Microsoft Web Deployment Tool to recreate a website structure.
A: you can call a small VBScript from any installer via command line, and access the IIS object.
Set objMimeMap = GetObject("IIS://localhost/w3svc")
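For instance, a sketch of creating a virtual directory through the IIS ADSI provider (the site ID, names and paths are assumptions):
Set objRoot = GetObject("IIS://localhost/W3SVC/1/Root")   ' site ID 1 assumed
Set objVDir = objRoot.Create("IIsWebVirtualDir", "MyApp") ' "MyApp" is a placeholder
objVDir.Path = "C:\inetpub\MyApp"
objVDir.AccessRead = True
objVDir.AccessScript = True
objVDir.AppCreate True    ' mark it as an in-process application
objVDir.SetInfo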
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to remove duplicate elements from an xml file? I have an XML file like
<ns0:Employees xmlns:ns0="http://TestIndexMap.Employees">
<Employee FirstName="FirstName_0" LastName="LastName_1" dept="dept_2" empNumber="1">
<Schedules>
<Schedule Date_join="2008-01-20" Date_end="2008-01-30" />
</Schedules>
</Employee>
<Employee FirstName="FirstName_0" LastName="LastName_1" dept="dept_2" empNumber="2">
<Schedules>
<Schedule Date_join="2008-01-20" Date_end="2008-01-30" />
</Schedules>
</Employee>
<Employee FirstName="FirstName_2" LastName="LastName_1" dept="dept_2" empNumber="2">
<Schedules>
<Schedule Date_join="2007-01-21" Date_end="2007-12-30" />
</Schedules>
</Employee>
<Employee FirstName="FirstName_2" LastName="LastName_1" dept="dept_2" empNumber="2">
<Schedules>
<Schedule Date_join="2007-01-21" Date_end="2007-12-30" />
<Schedule Date_join="2008-06-20" Date_end="2008-01-30" />
</Schedules>
</Employee>
</ns0:Employees>
I would like to remove the duplicates based on the FirstName, LastName, Date_join and Date_end.
Please, can someone explain how to achieve this with XSLT?
A: Here are some samples of how to remove duplicates based on element name and id field. It should be not too hard to extend this to arbitrary fields.
Q: Expansion. A part of my xml looks like this:
<location>
<state>xxxx</state>
</location>
<location>
<state>yyyy</state>
</location>
<location>
<state>xxxx</state>
</location>
The desired output is:
xxxx
yyyy
That is, duplicate values of state should not be printed.
Can this be done?
<xsl:variable name="unique-list"
select="//state[not(.=following::state)]" />
<xsl:for-each select="$unique-list">
<xsl:value-of select="." />
</xsl:for-each>
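Applied to the Employee example in the question, one possible sketch uses a key plus Muenchian grouping (XSLT 1.0). It assumes the ns0 prefix is declared on the stylesheet and that the first Schedule of each Employee is what defines uniqueness:
<xsl:key name="emp-key" match="Employee"
    select="concat(@FirstName, '|', @LastName, '|',
                   Schedules/Schedule/@Date_join, '|',
                   Schedules/Schedule/@Date_end)"/>

<xsl:template match="ns0:Employees">
  <xsl:copy>
    <xsl:copy-of select="Employee[generate-id() = generate-id(
        key('emp-key', concat(@FirstName, '|', @LastName, '|',
            Schedules/Schedule/@Date_join, '|',
            Schedules/Schedule/@Date_end))[1])]"/>
  </xsl:copy>
</xsl:template>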
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can I create my custom Shell Context Handlers for Windows? Problem
Language: C# 2.0 or later
I would like to register context handlers to create menus when the user right-clicks certain files (in my case *.eic). What is the procedure to register, unregister (clean up) and handle events (clicks) from these menus?
I have a clue it's something to do with the windows registry, but considering how much stuff there is in .net, I wouldn't be surprised if there are handy methods to do this clean and easy.
Code snippets, website references, comments are all good. Please toss them at me.
Update
Obviously there is a slight problem creating context menus in managed languages, as several users have commented. Is there any other preferred way of achieving the same behaviour, or should I spend time looking into these workarounds? I don't mind doing that at all, I'm glad people have put effort into making this possible - but I still want to know if there is a "proper/clean" way of achieving this.
A: Resist writing Shell Extensions in managed languages - there are a multitude of things that could go bang if you pursue this route.
Have a browse through this thread for more details. It contains links to do it if really want, and sagely advice of why it can be done, but shouldn't.
http://social.msdn.microsoft.com/Forums/en-US/netfxbcl/thread/1428326d-7950-42b4-ad94-8e962124043e/
You're back to unmanaged C/C++ as your only real tools here.
A: This is not a good idea because of potential dependency issues between different versions of the .NET Framework. Your shell extension could be expecting one version, while a different version may have already been loaded by the application that's currently running.
This thread contains a good summary of the situation.
A: I've done them before in C#. It ends up being a hell of a lot harder than it should be. Once you get the boilerplate code down, though, it is easy to roll out new items. I followed this link:
Link To Info
A: While others already mentioned that writing shell extensions in pure .NET is a bad idea due to framework conflicts, you should still note that:
*
*There are 3rd party drivers out there (see Eldos or LogicNP) that do the unmanaged side for you, allowing you to write managed code that talks to the native driver, thus preventing shell-related CLR version conflicts.
*A recent MSDN article mentioned that Microsoft has solved this problem for the CoreCLR, as used by Silverlight. They've accomplished this by allowing multiple versions of the CLR to run in the same process, thus fixing the problem. The author further stated that this fix in Silverlight will be rolled into future versions of the full CLR. (Meaning, in the future, it will be quite feasible to write shell extensions in managed code.)
A: As the prior comments mention, it isn't the best idea to write shell extensions in managed languages, but I thought I'd share an Open Source project that is doing just that :)
ShellGlue is a managed shell extension that is actually quite helpful. The source also might be helpful to you if you're interested in pursuing writing a shell extension in C/C++.
A: Aside from the caveats that have been mentioned concerning the implementation of shell extensions in managed code, what you'd basically need to do is the following:
First, create a COM component in C# that implements the IShellExtInit and IContextMenu interfaces. How to create COM components in C# is described here. How to implement the necessary interfaces is described in this article. While the description is for a C++ implementation, you can apply that knowledge to your C# version.
Your COM component will have GUID called the Class-ID or CLSID. You need to register that ID with your file type as a context-menu shell extension:
HKEY_CLASSES_ROOT\.eic\ShellEx\ContextMenuHandlers\MyShellExt
(Default) -> {YOUR-COMPONENTS-CLSID}
Also make sure that you registered your component correctly as described in the C# COM tutorial. You should find it in the registry under
HKEY_CLASSES_ROOT\CLSID\{YOUR-COMPONENTS-CLSID}
InprocServer32
(Default) -> C:\WINDOWS\system32\mscoree.dll
Class -> YourImplClass
assembly -> YourAssembly, version=..., Culture=neutral, PublicKey=...
...
Good luck...
A: As others have pointed out, shell extensions are not practical in windows development currently.
I asked a similar question recently which was answered with a link to a guide to do exactly what I wanted to do
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: jquery: fastest DOM insertion? I got this bad feeling about how I insert larger amounts of HTML.
Let's assume we've got:
var html="<table>..<a-lot-of-other-tags />..</table>"
and I want to put this into
$("#mydiv")
previously I did something like
var html_obj = $(html);
$("#mydiv").append(html_obj);
Is it correct that jQuery is parsing the html to create DOM objects? Well, this is what I read somewhere (UPDATE: I meant that I have read that jQuery parses the html to create the whole DOM tree by hand - it's nonsense, right?!), so I changed my code:
$("#mydiv").attr("innerHTML", $("#mydiv").attr("innerHTML") + html);
Feels faster, is it ? And is it correct that this is equivalent to:
document.getElementById("mydiv").innerHTML += html ? or is jquery doing some additional expensive stuff in the background ?
Would love to learn alternatives as well.
A: What are you attempting to avoid? "A bad feeling" is incredibly vague. If you have heard "the DOM is slow" and decided to "avoid the DOM", then this is impossible. Every method of inserting code into a page, including innerHTML, will result in DOM objects being created. The DOM is the representation of the document in your browser's memory. You want DOM objects to be created.
The reason why people say "the DOM is slow" is because creating elements with document.createElement(), which is the official DOM interface for creating elements, is slower than using the non-standard innerHTML property in some browsers. This doesn't mean that creating DOM objects is bad, it is necessary to create DOM objects, otherwise your code wouldn't do anything at all.
A: Try the following:
$("#mydiv").append(html);
The other answers, including the accepted answer, are slower by 2-10x: jsperf.
The accepted answer does not work in IE 6, 7, and 8 because you can't set innerHTML of a <table> element, due to a bug in IE: jsbin.
A: innerHTML is remarkably fast, and in many cases you will get the best results just setting that (I would just use append).
However, if there is much already in "mydiv" then you are forcing the browser to parse and render all of that content again (everything that was there before, plus all of your new content). You can avoid this by appending a document fragment onto "mydiv" instead:
var frag = document.createDocumentFragment();
frag.innerHTML = html;
$("#mydiv").append(frag);
In this way, only your new content gets parsed (unavoidable) and the existing content does not.
EDIT: My bad... I've discovered that innerHTML isn't well supported on document fragments. You can use the same technique with any node type. For your example, you could create the root table node and insert the innerHTML into that:
var frag = document.createElement('table');
frag.innerHTML = tableInnerHtml;
$("#mydiv").append(frag);
A: The answer about using a DOM fragment is on the right track. If you have a bunch of html objects that you are constant inserting into the DOM then you will see some speed improvements using the fragment. This post by John Resig explains it pretty well:
http://ejohn.org/blog/dom-documentfragments/
A: The fastest way to append items
The fastest way to append to the DOM tree is to buffer all of your appends into a single DOM fragment, then append the DOM fragment to the DOM.
This is the method I use in my game engine.
//Returns a new Buffer object
function Buffer() {
//the fragment
var domFragment = document.createDocumentFragment();
//Adds a node to the dom fragment
function add(node) {
domFragment.appendChild(node);
}
//Flushes the buffer to a node
function flush(targetNode) {
//if the target node is not given then use the body
var targetNode = targetNode || document.body;
//append the domFragment to the target
targetNode.appendChild(domFragment);
}
//return the buffer
return {
"add": add,
"flush": flush
}
}
//to make a buffer do this
var buffer = Buffer();
//to add elements to the buffer do the following
buffer.add(someNode1);
//continue to add elements to the buffer
buffer.add(someNode2);
buffer.add(someNode3);
buffer.add(someNode4);
buffer.add(someN...);
//when you are done adding nodes flush the nodes to the containing div in the dom
buffer.flush(myContainerNode);
Using this object I am able to render ~1000 items to the screen ~40 times a second in Firefox 4.
Here's a use case.
A: For starters, write a script that times how long it takes to do it 100 or 1,000 times with each method.
To make sure the repeats aren't somehow optimized away--I'm no expert on JavaScript engines--vary the html you're inserting every time, say by putting '0001' then '0002' then '0003' in a certain cell of the table.
A: I create a giant string and then append this string with jQuery.
Works good and fast, for me.
A: You mention being interested in alternatives. If you look at the listing of DOM-related jQuery plugins you'll find several that are dedicated to programmatically generating DOM trees. See for instance SuperFlyDom or DOM Elements Creator; but there are others.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
}
|
Q: Hyperlinking an image using CSS I know this is probably the dumbest question ever, however I am a total beginner when it comes to CSS; how do you hyperlink an image on a webpage using an image which is sourced from CSS? I am trying to set the title image on my website linkable to the frontpage. Thanks!
Edit: Just to make it clear, I'm sourcing my image from CSS, the CSS code for the header div is as follows:-
#header
{
width: 1000px;
margin: 0px auto;
padding: 0px 15px 0px 15px;
border: none;
background: url(images/title.png) no-repeat bottom;
width: 1000px;
height: 100px;
}
I want to know how to make this div hyperlinked on my webpage without having to make it an anchor rather than a div.
A: To link a css-sourced background-image:
#header {
display:block;
margin: 0px auto;
padding: 0px 15px 0px 15px;
border: none;
background: url(images/title.png) no-repeat bottom;
width: 1000px;
height: 100px;
}
<a id="header" href="blah.html" class="linkedImage">
The key thing here is to turn the anchor tag into a block element, so height and width work. Otherwise it's an inline element and will ignore height.
A: That's really not a CSS thing. You still need your A tag to make that work. (But use CSS to make sure the image border is either removed, or designed to your required spec.)
<a href="index.html"><img src="foo" class="whatever" alt="foo alt" /></a>
EDIT: Taking original intent (updated question) into account, a new code sample is below:
<a href="index.html"><img id="header" alt="foo alt" /></a>
You're still in an HTML world for links, as described by other answers on this question.
A: Sorry to spoil your fun, ladies and gentlemen, but it is possible.
Write in your header: [link](http://"link here")
then in your css:
#header a[href="https://link here"] {
display: inline-block;
width: 75px;
height: 75px;
font-size: 0;
}
.side .md a[href="link here"] {
background: url(%%picture here%%) no-repeat;
}
A: You still create links in HTML with 'a' (anchor) tags just like normal. CSS does not have anything that can specify if something is a link to somewhere or not.
Edit
The comments of mine and others still apply. To clarify, you can use JavaScript to make a div act as a link:
<div id="header" onclick="window.location='http://google.com';">My Header</div>
That isn't really great for usability however as people without JavaScript enabled will be unable to click that and have it act as a link.
Also, you may want to add a cursor: pointer; line to your CSS to give the header div the correct mouse cursor for a link.
A: <a href="linkto_title_page.html" class="titleLink"></a>
then in your css
.titleLink {
background-image: url(imageUrl);
}
A: You control design and styles with CSS, not the behavior of your content.
You're going to have to use something like <a id="header" href="[your link]">Logo</a> and then have a CSS block such as:
a#header {
background-image: url(...);
display: block;
width: ..;
height: ...;
}
You cannot nest a div inside <a> and still have 'valid' code. <a> is an inline element that cannot legally contain a block element. The only non-Javascript way to make a link is with the <a> element.
You can nest your <a> tag inside <div> and then put your image inside :)
If you don't want that, you're going to have to use JavaScript to make your <div> clickable:
document.getElementById("header").onclick = function() {
window.location='...';
}
A: CSS is for presentation only, not content. A link is content and should be put into the HTML of the site using a standard <a href=""> tag. You can then style this link (or add an image to the link) using CSS.
A: HTML is the only way to create links - it defines the structure and content of a web site.
CSS stands for Cascading Style Sheets - it only affects how things look.
Although normally an <a> tag is the only way to create a link, you can make a <div> clickable with JavaScript. I'd use jQuery:
$("div#header").click(function() {window.location=XXXXXX;});
A: You have to use an anchor element, wrapped in a container. On your homepage, your title would normally be an h1, but then on content pages it would probably change to a div. You should also always have text in the anchor element for people without CSS support and/or screen readers. The easiest way to hide that is through CSS. Here are both examples:
<h1 id="title"><a title="Home" href="index.html>My Title</a></h1>
<div id="title"><a title="Home" href="index.html>My Title</a></div>
and the CSS:
#title {
position:relative; /*Makes this a containing element*/
}
#title a {
background: transparent url(../images/logo.png) no-repeat scroll 0 0;
display:block;
text-indent:-9999px; /*Hides the anchor text*/
height:50px; /*Set height and width to the exact size of your image*/
width:200px;
}
Depending on the rest of your stylesheet you may need to adjust the h1 to make it look the same as the div; check out CSS resets for possible solutions to this.
A: Try this - use an H1 as the seat of your graphic instead. Saved my butt time and time again:
<h1 class="technique-six">
CSS-Tricks
</h1>
h1.technique-six {
width: 350px;
padding: 75px 0 0 0;
height: 0;
background: url("images/header-image.jpg") no-repeat;
overflow: hidden;
}
Accessible, and also solid across browsers from IE6 up. You could also link the H1.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Wait until any of Future is done I have few asynchronous tasks running and I need to wait until at least one of them is finished (in the future probably I'll need to wait util M out of N tasks are finished).
Currently they are presented as Future, so I need something like
/**
* Blocks current thread until one of specified futures is done and returns it.
*/
public static <T> Future<T> waitForAny(Collection<Future<T>> futures)
throws AllFuturesFailedException
Is there anything like this? Or anything similar, not necessarily for Future? Currently I loop through the collection of futures, check if one is finished, then sleep for some time and check again. This does not look like the best solution, because if I sleep for a long period then unwanted delay is added, and if I sleep for a short period then it can affect performance.
I could try using
new CountDownLatch(1)
and decrease countdown when task is complete and do
countdown.await()
, but I found that possible only if I control Future creation. It is possible, but requires a system redesign, because currently the logic of task creation (sending a Callable to an ExecutorService) is separated from the decision about which Future to wait for. I could also override
<T> RunnableFuture<T> AbstractExecutorService.newTaskFor(Callable<T> callable)
and create a custom implementation of RunnableFuture with the ability to attach a listener to be notified when the task is finished, then attach such a listener to the needed tasks and use a CountDownLatch, but that means I have to override newTaskFor for every ExecutorService I use - and potentially there will be implementations which do not extend AbstractExecutorService. I could also try wrapping a given ExecutorService for the same purpose, but then I have to decorate all methods producing Futures.
All these solutions may work but seem very unnatural. It looks like I'm missing something simple, like
WaitHandle.WaitAny(WaitHandle[] waitHandles)
in c#. Are there any well known solutions for such kind of problem?
UPDATE:
Originally I did not have access to Future creation at all, so there was no elegant solution. After redesigning the system I got access to Future creation and was able to add countDownLatch.countDown() to the execution process; then I can countDownLatch.await() and everything works fine.
Thanks for the other answers; I did not know about ExecutorCompletionService and it can indeed be helpful in similar tasks, but in this particular case it could not be used because some Futures are created without any executor - the actual task is sent to another server via the network, completes remotely, and a completion notification is received.
A: ExecutorService.invokeAny
A: Why not just create a results queue and wait on the queue? Or more simply, use a CompletionService since that's what it is: an ExecutorService + result queue.
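A minimal sketch of that approach (everything is in java.util.concurrent; the executor, the Result type, and the task list are placeholders for your real work):
ExecutorService executor = Executors.newFixedThreadPool(4);
CompletionService<Result> completionService =
        new ExecutorCompletionService<Result>(executor);

for (Callable<Result> task : tasks) {
    completionService.submit(task);
}

// take() blocks until *any* submitted task has completed
Future<Result> first = completionService.take();
Result value = first.get();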
A: This is actually pretty easy with wait() and notify().
First, define a lock object. (You can use any class for this, but I like to be explicit):
package com.javadude.sample;
public class Lock {}
Next, define your worker thread. He must notify that lock object when he's finished with his processing. Note that the notify must be in a synchronized block locking on the lock object.
package com.javadude.sample;
public class Worker extends Thread {
private Lock lock_;
private long timeToSleep_;
private String name_;
public Worker(Lock lock, String name, long timeToSleep) {
lock_ = lock;
timeToSleep_ = timeToSleep;
name_ = name;
}
@Override
public void run() {
// do real work -- using a sleep here to simulate work
try {
sleep(timeToSleep_);
} catch (InterruptedException e) {
interrupt();
}
System.out.println(name_ + " is done... notifying");
// notify whoever is waiting, in this case, the client
synchronized (lock_) {
lock_.notify();
}
}
}
Finally, you can write your client:
package com.javadude.sample;
public class Client {
public static void main(String[] args) {
Lock lock = new Lock();
Worker worker1 = new Worker(lock, "worker1", 15000);
Worker worker2 = new Worker(lock, "worker2", 10000);
Worker worker3 = new Worker(lock, "worker3", 5000);
Worker worker4 = new Worker(lock, "worker4", 20000);
boolean started = false;
int numNotifies = 0;
while (true) {
synchronized (lock) {
try {
if (!started) {
// need to do the start here so we grab the lock, just
// in case one of the threads is fast -- if we had done the
// starts outside the synchronized block, a fast thread could
// get to its notification *before* the client is waiting for it
worker1.start();
worker2.start();
worker3.start();
worker4.start();
started = true;
}
lock.wait();
} catch (InterruptedException e) {
break;
}
numNotifies++;
if (numNotifies == 4) {
break;
}
System.out.println("Notified!");
}
}
System.out.println("Everyone has notified me... I'm done");
}
}
A: simple, check out ExecutorCompletionService.
A: As far as I know, Java has no analogous structure to the WaitHandle.WaitAny method.
It seems to me that this could be achieved through a "WaitableFuture" decorator:
public class WaitableFuture<T>
extends Future<T>
{
private CountDownLatch countDownLatch;
WaitableFuture(CountDownLatch countDownLatch)
{
super();
this.countDownLatch = countDownLatch;
}
void doTask()
{
super.doTask();
this.countDownLatch.countDown();
}
}
Though this would only work if it can be inserted before the execution code, since otherwise the execution code would not have the new doTask() method. But I really see no way of doing this without polling if you cannot somehow gain control of the Future object before execution.
Or if the future always runs in its own thread, and you can somehow get that thread. Then you could spawn a new thread to join each other thread, then handle the waiting mechanism after the join returns... This would be really ugly and would induce a lot of overhead though. And if some Future objects don't finish, you could have a lot of blocked threads depending on dead threads. If you're not careful, this could leak memory and system resources.
/**
* Extremely ugly way of implementing WaitHandle.WaitAny for Thread.Join().
*/
public static void joinAny(Collection<Thread> threads, int numberToWaitFor) throws InterruptedException
{
CountDownLatch countDownLatch = new CountDownLatch(numberToWaitFor);
    for (Thread thread : threads)
{
(new Thread(new JoinThreadHelper(thread, countDownLatch))).start();
}
countDownLatch.await();
}
class JoinThreadHelper
implements Runnable
{
Thread thread;
CountDownLatch countDownLatch;
JoinThreadHelper(Thread thread, CountDownLatch countDownLatch)
{
this.thread = thread;
this.countDownLatch = countDownLatch;
}
    public void run()
    {
        try {
            this.thread.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        this.countDownLatch.countDown();
    }
}
A: If you can use CompletableFutures instead then there is CompletableFuture.anyOf that does what you want, just call join on the result:
CompletableFuture.anyOf(futures).join()
You can use CompletableFutures with executors by calling the CompletableFuture.supplyAsync or runAsync methods.
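For example (a sketch; slowCall and the executor are placeholders standing in for your real tasks):
ExecutorService executor = Executors.newFixedThreadPool(4);

CompletableFuture<String> f1 = CompletableFuture.supplyAsync(() -> slowCall("a"), executor);
CompletableFuture<String> f2 = CompletableFuture.supplyAsync(() -> slowCall("b"), executor);

// anyOf takes a varargs array and completes as soon as any of its inputs completes
Object firstResult = CompletableFuture.anyOf(f1, f2).join();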
A: Since you don't care which one finishes, why not just have a single WaitHandle for all threads and wait on that? Whichever one finishes first can set the handle.
A: See this option:
public class WaitForAnyRedux {
private static final int POOL_SIZE = 10;
public static <T> T waitForAny(Collection<T> collection) throws InterruptedException, ExecutionException {
List<Callable<T>> callables = new ArrayList<Callable<T>>();
for (final T t : collection) {
Callable<T> callable = Executors.callable(new Thread() {
@Override
public void run() {
synchronized (t) {
try {
t.wait();
} catch (InterruptedException e) {
}
}
}
}, t);
callables.add(callable);
}
BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(POOL_SIZE);
ExecutorService executorService = new ThreadPoolExecutor(POOL_SIZE, POOL_SIZE, 0, TimeUnit.SECONDS, queue);
return executorService.invokeAny(callables);
}
static public void main(String[] args) throws InterruptedException, ExecutionException {
final List<Integer> integers = new ArrayList<Integer>();
for (int i = 0; i < POOL_SIZE; i++) {
integers.add(i);
}
(new Thread() {
public void run() {
Integer notified = null;
try {
notified = waitForAny(integers);
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
System.out.println("notified=" + notified);
}
}).start();
synchronized (integers) {
integers.wait(3000);
}
Integer randomInt = integers.get((new Random()).nextInt(POOL_SIZE));
System.out.println("Waking up " + randomInt);
synchronized (randomInt) {
randomInt.notify();
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58"
}
|
Q: How would you display an array of integers as a set of ranges? (algorithm) Given an array of integers, what is the simplest way to iterate over it and figure out all the ranges it covers? for example, for an array such as:
$numbers = array(1,3,4,5,6,8,11,12,14,15,16);
The ranges would be:
1,3-6,8,11-12,14-16
A: Here's a Python implementation; it should be easy enough to follow.
numbers = [1,3,4,5,6,8,11,12,14,15,16];
def is_predecessor(i1, i2):
if i1 == i2 - 1:
return True;
else:
return False;
def make_range(i1, i2):
if i1 == i2:
return str(i1);
else:
return str(i1) + "-" + str(i2);
previous_element = None;
current_start_element = None;
for number in numbers:
if not is_predecessor(previous_element, number):
if current_start_element is not None:
print make_range(current_start_element, previous_element);
current_start_element = number;
previous_element = number;
# handle last pair
if current_start_element is not None:
print make_range(current_start_element, previous_element);
This outputs:
1
3-6
8
11-12
14-16
I know, I know, it isn't an algorithm, but I found it harder to actually explain it without having indentation problems than to just implement a solution for it.
A: Here's a C# 3.0'y way of doing it:
Points of interest:
*
*automatic properties (public int Property { get; set; })
*using new object initializers (new Range { Begin = xxx, ... })
*using yield for lazy evaluation
*using linq extension methods (First() and Skip())
-
class Demo
{
private class Range
{
public int Begin { get; set; }
public int End { get; set; }
public override string ToString()
{
if (Begin == End)
return string.Format("{0}", Begin);
else
return string.Format("{0}-{1}", Begin, End);
}
}
static void Main(string[] args)
{
var list = new List<int> { 1, 3, 4, 5, 6, 8, 11, 12, 14, 15, 16 };
// list.Sort();
var ranges = GetRangesForSortedList(list);
PrintRange(ranges);
Console.Read();
}
private static void PrintRange(IEnumerable<Range> ranges)
{
if (ranges.Count() == 0)
return;
Console.Write("[{0}", ranges.First());
foreach (Range range in ranges.Skip(1))
{
Console.Write(", {0}", range);
}
Console.WriteLine("]");
}
private static IEnumerable<Range> GetRangesForSortedList(IList<int> sortedList)
{
if (sortedList.Count < 1)
yield break;
int firstItem = sortedList.First();
Range currentRange = new Range { Begin = firstItem, End = firstItem };
foreach (int item in sortedList.Skip(1))
{
if (item == currentRange.End + 1)
{
currentRange.End = item;
}
else
{
yield return currentRange;
currentRange = new Range { Begin = item, End = item };
}
}
yield return currentRange;
}
}
Cheers, David
A: first: sort
second: tokenise
then: print the first value and loop over subsequent ones. If the 'current' value is equal to the last value +1, skip it. Otherwise, if you have skipped values, print a dash and the value; otherwise print a comma and the value, and repeat.
That should do. Unless you wanted me to code up your homework for you? :)
A: If the array is sorted, as in your example, I would define buckets containing a min and a max.
Initialize: Create a bucket with a min and a max equal to the first value.
Loop: Compare each value with the max of the current bucket. If it is equal to or 1 more than the current max, update the max. If it is more than 1 greater than the max, save the bucket to a list of buckets and start a new bucket.
At the end you will have a set of buckets with a min and a max in each. If the min is the same as the max, print a single number (ie: in your example, the first bucket would have a min and a max of 1). If they are different, print as a range.
Example implementation in lisp:
CL-USER> (defun print-ranges (integer-list)
(let ((sorted-list (sort integer-list #'<)))
(loop with buckets = ()
with current-bucket
for int in sorted-list
initially (setf current-bucket (cons (first sorted-list) (first sorted-list)))
do (cond ((= int (cdr current-bucket))) ; do nothing, this number is already in range
((= (1- int) (cdr current-bucket)) ; number is 1 higher--update bucket's max
(setf (cdr current-bucket) int))
(t
(push current-bucket buckets)
(setf current-bucket (cons int int)))) ; set up a new bucket
finally (push current-bucket buckets)
(loop for firstp = t then nil
for bucket in (nreverse buckets) do
(cond ((= (car bucket) (cdr bucket))
(format t "~:[,~;~]~D" firstp (car bucket)))
(t
(format t "~:[,~;~]~D-~D" firstp (car bucket) (cdr bucket))))))))
PRINT-RANGES
CL-USER> (print-ranges (list 1 3 4 5 6 8 11 12 14 15 16))
1,3-6,8,11-12,14-16
NIL
CL-USER>
Basically you end up with a list of things, where each thing has (lowest-in-bucket, highest-in-bucket). Those are your ranges.
If the list is not already sorted, sort it first.
A: If the array is sorted in ascending order, then the problem is easy. Define a Range structure or class, which has a beginning and an end. Then go through the array. If the current element is one more than the previous, update Range.end, otherwise create a new range with this element as Range.begin. Store the ranges to a dynamic array or a linked list. Or just print them out as you go.
If the array may not be sorted, then sort it first.
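A compact Java sketch of that approach (assumes the input is already sorted; each int[] pair stands in for the Range type described above):
static List<int[]> toRanges(int[] sorted) {
    List<int[]> ranges = new ArrayList<int[]>();
    for (int n : sorted) {
        int[] last = ranges.isEmpty() ? null : ranges.get(ranges.size() - 1);
        if (last != null && n <= last[1] + 1) {
            last[1] = n;                      // extend the current range (also absorbs duplicates)
        } else {
            ranges.add(new int[] { n, n });   // start a new range
        }
    }
    return ranges;                            // each element holds {begin, end}
}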
A: C (gcc)
It is similar to the Python version.
void ranges(int n; int a[n], int n)
{
qsort(a, n, sizeof(*a), intcmp);
for (int i = 0; i < n; ++i) {
const int start = i;
while(i < n-1 and a[i] >= a[i+1]-1)
++i;
printf("%d", a[start]);
if (a[start] != a[i])
printf("-%d", a[i]);
if (i < n-1)
printf(",");
}
printf("\n");
}
Example:
/**
* $ gcc -std=c99 -Wall ranges.c -o ranges && ./ranges
*/
#include <iso646.h> // and
#include <stdio.h>
#include <stdlib.h>
#define T(args...) \
{ \
int array[] = {args}; \
ranges(array, sizeof(array) / sizeof(*array)); \
}
int intcmp(const void* a, const void* b)
{
const int *ai = a;
const int *bi = b;
if (*ai < *bi)
return -1;
else if (*ai > *bi)
return 1;
else
return 0;
}
int main(void)
{
T(1,3,4,5,6,8,11,12,14,15,16);
T();
T(1);
T(1, 2);
T(3, 1);
T(1, 3, 4);
T(1, 2, 4);
T(1, 1, 2, 4);
T(1, 2, 2, 4);
T(1, 2, 2, 3, 5, 5);
}
Output:
1,3-6,8,11-12,14-16
1
1-2
1,3
1,3-4
1-2,4
1-2,4
1-2,4
1-3,5
A: Assuming the list is ordered you could start at the end and keep subtracting the next one down. While the difference is 1, you're in a range. When it's not, you start a new range.
i.e
16-15 = 1
15-14 = 1
14-12 = 2, the range is 16-14 - start a new range.
A: startRange = array[0];
for(i=0;i<array.length;i++)
{
if (array[i + 1] - array[i] > 1)
{
endRange = array[i];
pushRangeOntoArray(startRange,endRange);
i++;
startRange = array[i]
// need to check for end of array here
}
}
A: Here's my Perl solution. Could be cleaner and faster, but it shows how it works:
# Just in case it's not sorted...
my @list = sort { $a <=> $b } ( 1, 3, 4, 5, 6, 8, 11, 12, 14, 15, 16 );
my $range = [ $list[0] ];
for(@list[1 .. $#list]) {
if($_ == $range->[-1] + 1) {
push @$range, $_;
}
else {
print $#$range ? $range->[0] . '-' . $range->[-1] : $range->[0], "\n";
$range = [ $_ ];
}
}
A: My solution in Java 1.5 would be:
public static List<String> getRanges(int... in) {
List<String> result = new ArrayList<String>();
int last = -1;
for (int i : in) {
if (i != (last + 1)) {
if (!result.isEmpty()) {
addRange(result, last);
}
result.add(String.valueOf(i));
}
last = i;
}
addRange(result, last);
return result;
}
private static void addRange(List<String> result, int last) {
int lastPosition = result.size() - 1;
String lastResult = result.get(lastPosition);
if (!lastResult.equals(String.valueOf(last))) {
result.set(lastPosition, lastResult + "-" + last);
}
}
public static void main(String[] args) {
List<String> ranges = getRanges(1, 3, 4, 5, 6, 8, 11, 12, 14, 15, 16);
System.out.println(ranges);
}
which outputs:
[1, 3-6, 8, 11-12, 14-16]
Greetz, GHad
A: I believe the mergeinfo property that was introduced to Subversion in the 1.5 release has a format that is the same as what you're asking for, so you could potentially go look through the source of Subversion to find out how they do it. I'd be surprised if its any different than the other suggestions that have already been posted here.
A: I will assume the array X() is pre-sorted (and if not, sort the array before-hand).
for each element of X() as $element (with $i as current array posistion)
add $element to end of array Y()
if (X($i) + 1 is less than X($i + 1)) AND ($i + 1 is not greater than sizeof(X())) then
append Y(1)."-".Y(sizeof(Y())) to end of Z()
unset Y()
end if
next
if anything remains in Y() append to end of Z()
well, that's how I would do it.
A: Create a simple range type which contains start / end of range values. Add a constructor which takes only one value and sets start = end = value. Maintain a stack of range objects, work your way through a sorted copy of the array, extend the top range or add a new range as appropriate. More specifically, when the value in the array is 1 + the end value for the range object on the to of the stack, increment the end value for that range, when it's not, push a new range (with start = end = value at index in array) onto the stack.
A: module Main where
ranges :: [Int] -> [[Int]]
ranges [] = []
ranges list@(x:xs) = let adj = adjacent list in
let len = length adj in
if length adj == 1
then [[x]] ++ ranges xs
else [[x,(last adj)]] ++ ranges (drop ((length adj) - 1) xs)
where adjacent [] = []
adjacent (x:xs) = if (xs /= []) && (x + 1) == head xs
then [x] ++ adjacent (xs)
else [x]
main = do putStrLn $ show $ ranges [1,2,3,4,5,6,8,11,12,14,15,16]
-- Output: [[1,6],[8],[11,12],[14,16]]
Here's my best shot in Haskell.
A: Perl 6
sub to_ranges( Int *@data ){
my @ranges;
OUTER: for @data -> $item {
for @ranges -> $range {
# short circuit if the $item is in a range
next OUTER if $range[0] <= $item <= $range[1];
given( $item ){
when( $range[0]-1 ){ $range[0] = $item }
when( $range[1]+1 ){ $range[1] = $item }
}
}
push @ranges, [$item,$item];
}
return @ranges;
}
A: Python (>= 2.6)
This version additionally handles duplicates and unsorted sequences.
from __future__ import print_function
def ranges(a):
a.sort()
i = 0
while i < len(a):
start = i
while i < len(a)-1 and a[i] >= a[i+1]-1:
i += 1
print(a[start] if a[start] == a[i] else "%d-%d" % (a[start], a[i]),
end="," if i < len(a)-1 else "\n")
i += 1
Example:
import random
r = range(10)
random.shuffle(r)
ranges(r)
ranges([1,3,4,5,6,8,11,12,14,15,16]);
ranges([])
ranges([1])
ranges([1, 2])
ranges([1, 3])
ranges([1, 3, 4])
ranges([1, 2, 4])
ranges([1, 1, 2, 4])
ranges([1, 2, 2, 4])
ranges([1, 2, 2, 3, 5, 5])
Output:
0-9
1,3-6,8,11-12,14-16
1
1-2
1,3
1,3-4
1-2,4
1-2,4
1-2,4
1-3,5
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: How to initialize Pango under Win32? Having downloaded Pango and GLib from the GTK+ Project's Win32 downloads page and having created and configured a Win32 project under Visual Studio 2005 so it points to the proper lib and include directories, how do you initialize Pango for rendering to a Win32 window?
Should the first call be to pango_win32_get_context()? Calling that function causes the application to hang on that call, as the function never returns.
What should be the first call? What other calls are needed to initialize Pango for Win32 and render a simple text string? Are there any examples available online for rendering with Pango under Win32?
A: Pango is a GObject based library. As such, you need to make sure that the glib dynamic type system is initialized before using any of its functionality. This can be done by calling g_type_init() (either directly or indirectly via something like gtk_init()). Could this be your problem?
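A minimal sketch along those lines (assuming the GLib/Pango DLLs from the GTK+ Win32 downloads are on the path; in a real application you would go on to create a layout and render it):
#include <glib-object.h>
#include <pango/pangowin32.h>

int main(void)
{
    g_type_init();  /* initialise the GObject type system before any Pango call */

    PangoContext *context = pango_win32_get_context();
    /* ... create a PangoLayout from the context, set its text, render to an HDC ... */

    return 0;
}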
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Any way to use Help.ShowHelp in Windows CE Core license build? I am developing a Compact Framework 3.5 application for Windows CE 6.0. Help.ShowHelp() throws a NotSupportedException. At windowsembedded.com I found this statement:
"Help is not included in the Core run-time because it requires a browser."
Is this the reason for the exception? If so, is there any recommended way to get around this issue or do I need to build my own help system?
A: ShowHelp uses the browser control (via webview.dll IIRC) under CE. The browser control is definitely part of the Professional CE license SKU (assessment tool here. There used to be a Word doc that listed every component, but I can't seem to locate a download for it), so you will have to roll your own Help framework if you intend to stay with a Core license. Options might be to use Opera or MiniMo as a browser, but you'd still have to roll all of the search/help stuff.
Using RTF might be simpler.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: nonvirtual interface idiom for more than two levels of inheritance? The non-virtual interface idiom describes how the virtual methods are nonpublic customisation points, and public methods are nonvirtual to allow the base class to control at all times how the customisation points are called.
This is an elegant idiom and I like to use it, but how does it work if the derived class is itself a base class?
A: It works, because the derived class can override a private virtual function of a base class, even if the base class function overrides its base class function.
This is perfectly legal:
class Parent
{
public:
int foo() {return bar();} // the non-virtual public interface
private:
virtual int bar();
};
class Child : public Parent
{
private:
virtual int bar(); // overrides Parent::bar()
};
class Grandchild : public Child
{
private:
virtual int bar(); // overrides Child::bar();
};
A: The derived class can decide for itself:
You can just override the method completely by implementing the virtual function.
You can augment the method by calling the 'middle' class's function at some point in your derived class method, as sketched below.
If that's not what you want, you need to set it up explicitly in the 'middle' class. I wouldn't though. If you find yourself desiring this, it probably means you didn't give the base class enough customization points.
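To illustrate the second option, building on the Parent/Child/Grandchild example above (a sketch; the return values are placeholders, and for Grandchild to call Child::bar() directly, Child must declare it protected rather than private):
class Child : public Parent
{
protected:                      // protected so Grandchild can augment it
    virtual int bar() { return 42; }
};

class Grandchild : public Child
{
private:
    virtual int bar()
    {
        int base = Child::bar();  // augment: reuse the middle class's step
        return base + 1;          // ...then add Grandchild-specific work
    }
};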
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Can you do LINQ-like queries in a language like Python or Boo? Take this simple C# LINQ query, and imagine that db.Numbers is an SQL table with one column Number:
var result =
from n in db.Numbers
where n.Number < 5
select n.Number;
This will run very efficiently in C#, because it generates an SQL query something like
select Number from Numbers where Number < 5
What it doesn't do is select all the numbers from the database, and then filter them in C#, as it might appear to do at first.
Python supports a similar syntax:
result = [n.Number for n in Numbers if n.Number < 5]
But the if clause here does the filtering on the client side, rather than the server side, which is much less efficient.
Is there something as efficient as LINQ in Python? (I'm currently evaluating Python vs. IronPython vs. Boo, so an answer that works in any of those languages is fine.)
A: SqlSoup in SQLAlchemy gives you the quickest solution in Python, I think, if you want a clear(ish) one-liner. Look at the page to see.
It should be something like...
result = [n.Number for n in db.Numbers.filter(db.Numbers.Number < 5).all()]
A: Look closely at SQLAlchemy. This can probably do much of what you want. It gives you Python syntax for plain-old SQL that runs on the server.
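For instance, with a mapped Number class and a session already set up (both assumed here), the filtering is done in the generated SQL rather than in Python:
query = session.query(Number).filter(Number.number < 5)
result = [row.number for row in query]   # emits SELECT ... WHERE numbers.number < :param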
A: LINQ is a language feature of C# and VB.NET. It is a special syntax recognized by the compiler and treated specially. It is also dependent on another language feature called expression trees.
Expression trees are a little different in that they are not special syntax. They are written just like any other class instantiation, but the compiler does treat them specially under the covers by turning a lambda into an instantiation of a run-time abstract syntax tree. These can be manipulated at run-time to produce a command in another language (i.e. SQL).
The C# and VB.NET compilers take LINQ syntax, and turn it into lambdas, then pass those into expression tree instantiations. Then there are a bunch of framework classes that manipulate these trees to produce SQL. You can also find other libraries, both MS-produced and third party, that offer "LINQ providers", which basically pop a different AST processer in to produce something from the LINQ other than SQL.
So one obstacle to doing these things in another language is the question whether they support run-time AST building/manipulation. I don't know whether any implementations of Python or Boo do, but I haven't heard of any such features.
A: I believe that when IronPython 2.0 is complete, it will have LINQ support (see this thread for some example discussion). Right now you should be able to write something like:
Queryable.Select(Queryable.Where(someInputSequence, somePredicate), someFuncThatReturnsTheSequenceElement)
Something better might have made it into IronPython 2.0b4 - there's a lot of current discussion about how naming conflicts were handled.
A: Boo supports list generator expressions using the same syntax as python. For more information on that, check out the Boo documentation on Generator expressions and List comprehensions.
A: A key factor for LINQ is the ability of the compiler to generate expression trees.
I am using a macro in Nemerle that converts a given Nemerle expression into an Expression tree object.
I can then pass this to the Where/Select/etc extension methods on IQueryables.
It's not quite the syntax of C# and VB, but it's close enough for me.
I got the Nemerle macro via a link on this post:
http://groups.google.com/group/nemerle-dev/browse_thread/thread/99b9dcfe204a578e
It should be possible to create a similar macro for Boo. It's quite a bit of work however, given the large set of possible expressions you need to support.
Ayende has given a proof of concept here:
http://ayende.com/Blog/archive/2008/08/05/Ugly-Linq.aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Spring JTA TransactionManager config: Supporting both Tomcat and JBoss I have a web application using JPA and JTA with Spring. I would like to support both JBoss and Tomcat. When running on JBoss, I'd like to use JBoss' own TransactionManager, and when running on Tomcat, I'd like to use JOTM.
I have both scenarios working, but I now find that I seem to need two separate Spring configurations for the two cases. With JOTM, I need to use Spring's JotmFactoryBean:
<bean id="transactionManager"
class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="userTransaction">
<bean class="org.springframework.transaction.jta.JotmFactoryBean"/>
</property>
</bean>
In JBoss, though, I just need to fetch "TransactionManager" from JNDI:
<bean id="transactionManager"
class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManager">
<bean class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="resourceRef" value="true" />
<property name="jndiName" value="TransactionManager" />
<property name="expectedType"
value="javax.transaction.TransactionManager" />
</bean>
</property>
</bean>
Is there a way to configure this so that the appropriate TransactionManager - JBoss or JOTM - is used, without the need for two different configuration files?
A: You can use PropertyPlaceholderConfigurer to inject bean references as well as simple values.
For example if you call your beans 'jotm' and 'jboss' then you could inject your TM like:
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE">
<property name="location" value="classpath:/path/to/application.properties"/>
</bean>
<bean id="jotm">...</bean>
<bean id="jboss">...</bean>
<bean id="bean-requiring-transaction-manager">
<property name="transactionManager" ref="${transaction.strategy}"/>
</bean>
Then you can swap transaction managers using
*
*transaction.strategy=jotm in a properties file
*-Dtransaction.strategy=jotm as a system property
This is one possible approach. See my blog for a more complete example.
Hope this helps.
A: If you are using Spring 2.5 you can use <tx:jta-transaction-manager/>. I have not used it with JBoss but it should work for you according to section 9.8 Application server-specific integration from the Spring reference manual.
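A minimal configuration sketch (the tx namespace must be declared; the 2.5 schema versions here are an assumption matching that Spring release):
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                           http://www.springframework.org/schema/tx
                           http://www.springframework.org/schema/tx/spring-tx-2.5.xsd">

    <!-- Registers a bean named "transactionManager" that detects the container's JTA setup -->
    <tx:jta-transaction-manager/>

</beans>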
A: The <tx:jta-transaction-manager/> approach will look for a transaction manager in several default locations listed here. If your JBoss transaction manager is not in one of those locations, I suggest you move it, if possible, or move it in Tomcat so that both containers have their TM in the same JNDI location.
A: I think you have missed the point of JNDI. JNDI was pretty much written to solve the problem you have!
I think you can take it up a level, so instead of using the "userTransaction" or "transactionManager from JNDI" depending on your situation. Why not add the "JtaTransactionManager" to JNDI. That way you push the configuration to the JNDI where it is supposed to be instead of creating even more configuration files [ like there aren't enough already ;) ].
A: Just adding my experience here so I don't have to re-suffer the experience again.
As bmatthews68, Chochos and these posters have said, use <tx:jta-transaction-manager/> in your Spring bean file; it definitely provides the appropriate level of abstraction and there's no need to do anything extra on the Spring side.
As for Tomcat, I declared <Transaction factory="org.objectweb.jotm.UserTransactionFactory" jotm.timeout="60" /> in the default/shared conf/context.xml file, which binds to java:comp/UserTransaction. As this is one of the places searched for by Spring, you shouldn't need to do anything else.
One gotcha though: if like me you use Maven, make sure you exclude any dependencies on the javax.transaction:jta jar or set the scope to provided. Otherwise you will experience classloader issues.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Can you recommend a Windows based Network emulator? I'm looking for a Windows network emulator that can emulate the packet-loss & latency characteristics of a 3G wireless network.
I used to use a tool from GIPs that has been end-of-life'd. I've also tried Shunra Virtual Enterprise but found it a bit too basic
A: dummynet is now available for windows and linux as well. See
http://info.iet.unipi.it/~luigi/dummynet/
A: There's a FreeBSD tool called dummynet that can do this. Since you have a Windows setup, you could put it on a separate box and route through it for testing, or perhaps even run it on a VM on your Windows machine. I know of no Windows solution to this problem, but perhaps others will.
A: Have you tried LANforge ICE for Windows? http://www.candelatech.com/lanforge_v3/datasheet.html#ice
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Getting a char* from a _variant_t in optimal time Here's the code I want to speed up. It's getting a value from an ADO recordset and converting it to a char*. But this is slow. Can I skip the creation of the _bstr_t?
_variant_t var = pRs->Fields->GetItem(i)->GetValue();
if (V_VT(&var) == VT_BSTR)
{
char* p = (const char*) (_bstr_t) var;
A: Your problem (other than the possibility of a memory copy inside _bstr_t) is that you're converting the UNICODE BSTR into an ANSI char*.
You can use the USES_CONVERSION macros which perform the conversion on the stack, so they might be faster. Alternatively, keep the BSTR value as unicode if possible.
to convert:
USES_CONVERSION;
char* p = strdup(OLE2A(var.bstrVal));
// ...
free(p);
remember - the string returned from OLE2A (and its sister macros) return a string that is allocated on the stack - return from the enclosing scope and you have garbage string unless you copy it (and free it eventually, obviously)
A: The 4 bytes immediately before the BSTR's character data contain its length. You can loop through and take every other byte if the data is Unicode, or every byte if it is multibyte. Some sort of memcpy or other method would work too. IIRC, this can be faster than W2A or casting (LPCSTR)(_bstr_t).
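A rough sketch of that idea (only safe when the BSTR is known to hold ASCII-range characters; SysStringLen() reads the character count from that length prefix):
BSTR bstr = var.bstrVal;
UINT len = SysStringLen(bstr);           // character count stored in the length prefix
char* p = (char*)malloc(len + 1);
for (UINT i = 0; i < len; ++i)
    p[i] = (char)bstr[i];                // keep the low byte of each wide character
p[len] = '\0';
// ... use p ...
free(p);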
A: This creates a temporary on the stack:
USES_CONVERSION;
char *p=W2A(var.bstrVal);
This uses a slightly newer syntax and is probably more robust. It has a configurable size, beyond which it will use the heap so it avoids putting massive strings onto the stack:
char *p=CW2AEX<>(var.bstrVal);
A: _variant_t var = pRs->Fields->GetItem(i)->GetValue();
You can also make this assignment quicker by avoiding the fields collection all together. You should only use the Fields collection when you need to retrieve the item by name. If you know the fields by index you can instead use this.
_variant_t vara = pRs->Collect[i]->Value;
Note i cannot be an integer as ADO does not support VT_INTEGER, so you might as well use a long variable.
A: Ok, my C++ is getting a little rusty... but I don't think the conversion is your problem. That conversion doesn't really do anything except tell the compiler to consider _bstr_t a char*. Then you're just assigning the address of that pointer to p. Nothing's actually being "done."
Are you sure it's not just slow getting stuff from GetValue?
Or is my C++ rustier than I think...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/117755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|